The Chief Marketing Officer and the CDO – A Modern Fable

The Fox and the Grapes

This Fox has a longing for grapes:
He jumps, but the bunch still escapes.
So he goes away sour;
And, ’tis said, to this hour
Declares that he’s no taste for grapes.

— W.J.Linton (after Aesop)

Note:

Not all of the organisations I have worked with or for have had a C-level Executive accountable primarily for Marketing. Where they have, I have normally found the people holding these roles to be better informed about data matters than their peers. I have always found it easy and enjoyable to collaborate with such people. The same goes in general for Marketing Managers. This article is not about Marketing professionals; it is about poorly researched journalism.


 
Prelude…

The Decline and Fall of the CDO Empire?

I recently came across an article in Marketing Week with the clickbait-worthy headline of Why the rise of the chief data officer will be short-lived (their choice of capitalisation). The subhead continues in the same vein:

Chief data officers (ditto) are becoming increasingly common, but for a data strategy to work their appointments can only ever be a temporary fix.

Intrigued, I felt I had to avail myself of the wisdom and domain expertise contained in the article (the clickbait worked of course). The first few paragraphs reveal the actual motivation. The piece is a reaction [1] to the most senior Marketing person at easyJet being moved out of his role, which is being abolished, and – as part of the same reorganisation – a Chief Data Officer (CDO) being appointed. Now the first thing to say, based on the article’s introductory comments, is that easyJet did not have a Chief Marketing Officer. The role that was abolished was instead Chief Commercial Officer, so there was no one charged full-time with Marketing anyway. The Marketing responsibilities previously supported part-time by the CCO have now been spread among other executives.

The next part of the article covers the views of a Marketing Week columnist (pause for irony) before moving on to arrangements for the management of data matters in three UK-based organisations:

  • Camelot – who run the UK National Lottery
     
  • Mumsnet – which is a web-site for UK parents
     
  • Flubit – a growing on-line marketplace aiming to compete with Amazon

The first two of these have CDOs (albeit with one doing the role alongside other responsibilities). Both of these people:

[…] come at data as people with backgrounds in its use in marketing

Flubit does not have a CDO, which is used as supporting evidence for the superfluous nature of the role [2].

Suffice it to say that a straw poll consisting of the handful of organisations that the journalist was able to get a comment from is not the most robust of approaches [3]. Most of the time, the article does nothing more than to reflect the continuing confusion about whether or not organisations need CDOs and – assuming that they do – what their remit should be and who they should report to [4].

But then – without, it has to be said, much supporting evidence – the piece goes on to add that:

Most [CDOs – they would probably style it “Cdos”] are brought in to instill a data strategy across the business; once that is done their role should no longer be needed.

Symmetry

Now as a Group Theoretician, I am a great fan of symmetry. Symmetry relates to properties that remain invariant when something else is changed. Archetypally, an equilateral triangle is still an equilateral triangle when rotated by 120° [5]. More concretely, the laws of motion work just fine if we wind the clock forward 10 seconds (which incidentally leads to the principle of conservation of energy [6]).

Let’s assume that the Marketing Week assertion is true. I claim therefore that it must still be true under the symmetry of changing the C-level role. This would mean that the following also has to be true:

Most [Chief marketing officers] are brought in to instill a marketing strategy across the business; once that is done their role should no longer be needed.

Now maybe this statement is indeed true. However, I can’t really see the guys and gals at Marketing Week agreeing with this. So maybe it’s false instead. Then – employing reductio ad absurdum – the initial statement is also false [7].

If you don’t work in Marketing, then maybe a further transformation will convince you:

Most [Chief financial officers] are brought in to instill a finance strategy across the business; once that is done their role should no longer be needed.

I could go on, but this is already becoming as tedious to write as it was to read the original Marketing Week claim. The closing sentence of the article is probably its most revealing and informative:

[…] marketers must make sure they are leading [the data] agenda, or someone else will do it for them.

I will leave readers to draw their own conclusions on the merits of this piece and move on to other thoughts that reading it spurred in me.


 
…and Fugue

Electrification

Sometimes buried in the strangest of places you can find something of value, even if the value is different to the intentions of the person who buried it. Around some of the CDO forums that I attend [8] there is occasionally talk about just the type of issue that Marketing Week raises. An historical role that often comes up in these discussions is that of Chief Electrification Officer [9]. This supposedly was an Executive role in organisations as the 19th Century turned into the 20th and electricity grids began to be created. The person ostensibly filling this role would be responsible for shepherding the organisation’s transition from earlier forms of power (e.g. steam) to the new-fangled streams of electrons. Of course this role would be very important until the transition was completed; after that, redundancy surely beckoned.

Well, to my way of thinking, there are a couple of problems here. The first of these is alluded to by my choice of the words “supposedly” and “ostensibly” above. I am not entirely sure, based on my initial research [10], that this role ever actually existed. All the references I can find to it are modern pieces comparing it to the CDO role, so perhaps it is apocryphal.

The second is somewhat related. Electrification was an engineering problem; indeed the [US] National Academy of Engineering called it “the greatest engineering achievement of the 20th Century”. Surely the people tackling this would be engineers, potentially led by a Chief Engineer. Did the completion of electrification mean that there was no longer a need for engineers, or did they simply move on to the next engineering problem [11]?

Extending this analogy, I think that Chief Data Officers are more like Chief Engineers than Chief Electrification Officers, assuming that the latter ever existed. Why the confusion? Well I think part of it is because, over the last decade and a bit, organisations have been conditioned to believe the one-dimensional perspective that everything is a programme or a project [12]. I am less sure that this applies 100% to the CDO role.

It may well be that one thing that a CDO needs to get going is a data transformation programme. This may purely be focused on cultural aspects of how an organisation records, shares and otherwise uses data. It may be to build a new (or a first) Data Architecture. It may be to remediate issues with an existing Data Architecture. It may be to introduce or expand Data Governance. It may be to improve Data Quality. Or (and, in my experience, this is often the most likely) a combination of all five of these, plus other work, such as rapid tactical or interim deliveries. However, there is also a large element of data-centric work which is not project-based and instead falls into the category often described as “business as usual” (I loathe this term – I think that Data Operations & Technology is preferable). A handful of examples are as follows (this is not meant to be an exhaustive list) [13]:

  1. Addressing architectural debt that results from neglect of Data Assets or the frequently deleterious impact of improperly governed change portfolios [14]. This is often a series of small to medium-sized changes, rather than a project with a discrete scope and start and end dates.
     
  2. More positively, engaging proactively in the change process in an attempt to act as a steward of Data Assets.
     
  3. Establishing a regular Data Audit.
     
  4. Regular Data Management activities.
     
  5. Providing tailored Analytics to help understand some unscheduled or unexpected event.
     
  6. Establishment of a data “SWAT team” to respond to urgent architecture, quality or reporting needs.
     
  7. Running a Data Governance committee and related activities.
     
  8. Creating and managing a Data Science capability.
     
  9. Providing help and advice to those struggling to use Data facilities.
     
  10. Responding to new Data regulations.
     
  11. Creating and maintaining a target operating model for Data and its use.
     
  12. Supporting Data Services to aid systems integration.
     
  13. Production of regular reports and refreshing self-serve Data Repositories.
     
  14. Testing and re-testing of Data facilities subject to change or change in source Data.
     
  15. Providing training in the use of Data facilities or the importance of getting Data right-first-time.

The above all point to the need for an ongoing Data Function to carry out such work (and to form the core resources of any data programme / project work). I describe such a function in my series about The Anatomy of a Data Function.

Data Strategy

There are of course many other such examples, but instead of cataloguing each of them, let’s return to what Marketing Week describe as the central responsibility of a CDO: to formulate a Data Strategy. Surely this is a one-off activity, right?

Well is the Marketing strategy set once and then never changed? If there is some material shift in the overall Business strategy, might the Marketing strategy change as a result? What would be the impact on an existing Marketing strategy of insight showing that this was being less than effective; might this lead to the development of a new Marketing strategy? Would the Marketing strategy need to be revised to cater for new products and services, or new segments and territories? What would be the impact on the Marketing strategy of an acquisition or divestment?

As anyone who has spent significant time in the strategy arena will tell you, it is a fluid area. Things are never set in stone and strategies may need to be significantly revised, or indeed abandoned and replaced with something entirely new, as dictated by events. Strategy is not a fire-and-forget exercise, not if you want it to be relevant to your business today, as opposed to a year ago. Specifically with Data Strategy (as I explain in Building Momentum – How to begin becoming a Data-driven Organisation), I would recommend keeping it rather broad-brush at the beginning of its development, allowing it to be adapted based on feedback from initial interim work and thus ensuring it better meets business needs.

So expecting a Data Strategy (or any other type of strategy) to be done and dusted, with the key strategist dispensed with, is probably rather naive.


 
Coda


It would be really nice to think that sorting out their Data problems and seizing their Data opportunities are things that organisations can do once and then forget about. With twenty years’ experience of helping organisations to become more Data-centric, often with technical matters firmly in the background, I have to disabuse people of this all too frequent misconception. To adapt the National Canine Defence League’s [15] long-lived slogan from 1978:

A Chief Data Officer is for life, not just for Christmas.

With that out of the way, I’m off to write a well-informed and insightful article about how Marketing Departments should go about their business. Wish me luck!
 


 
Notes

 
[1]
 
I first wrote “knee-jerk reaction” and then thought that maybe I was being unkind. “When they go low, we go high” is a better maxim. Note: link opens a YouTube video.
 
[2]
 
I am sure that I read somewhere about the importance of the number of data points in any analysis; maybe I should ask a Data Scientist to remind me about this.
 
[3]
 
For a more balanced view of what real CDOs do, please take a look at my ongoing series of in-depth interviews.
 
[4]
 
As discussed in:

 
[5]
 
See Glimpses of Symmetry, Chapter 3 – Shifting Shapes for more on the properties of equilateral triangles.
 
[6]
 
As demonstrated by Emmy Noether in 1915.
 
[7]
 
At this point I think I am meant to say “Fake news! SAD!!!”
 
[8]
 
The [informal] proceedings of some of these may be viewed at:

 
[9]
 
Or Chief Electrical Officer, or Chief Electricity Officer.
 
[10]
 
I am doing some more digging and will of course update this piece should I find the evidence that has so far been elusive.
 
[11]
 
Self-driving electric cars come to mind of course. That or running a Starship.


 
[12]
 
As an aside, where do Programme Managers go when (or should that be if) their Programmes finish?
 
[13]
 
It might be argued that some of these operational functions could be handed to IT. However, given that some elements of data functions have probably been carved out of IT in the past, this might be a retrograde step.
 
[14]
 
See Bumps in the Road.
 
[15]
 
Now Dogs Trust.

 



 

More Definitions in the Data and Analytics Dictionary

The Data and Analytics Dictionary

The peterjamesthomas.com Data and Analytics Dictionary is an active document and I will continue to issue revised versions of it periodically. Here are 20 new definitions, including the first from other contributors (thanks Tenny!):

  1. Artificial Intelligence Platform
  2. Data Asset
  3. Data Audit
  4. Data Classification
  5. Data Consistency
  6. Data Controls
  7. Data Curation (contributor: Tenny Thomas Soman)
  8. Data Democratisation
  9. Data Dictionary
  10. Data Engineering
  11. Data Ethics
  12. Data Integrity
  13. Data Lineage
  14. Data Platform
  15. Data Strategy
  16. Data Wrangling (contributor: Tenny Thomas Soman)
  17. Explainable AI (contributor: Tenny Thomas Soman)
  18. Information Governance
  19. Referential Integrity
  20. Testing Data (Training Data)

Remember that The Dictionary is a free resource and quoting contents (ideally with acknowledgement) and linking to its entries (via the buttons provided) are both encouraged.

People are now also welcome to contribute their own definitions. You can use the comments section here, or the dedicated form. Submissions will be subject to editorial review and are not guaranteed to be accepted.
 


 


 

Data Science Challenges – It’s Deja Vu all over again!

The late Yogi Berra

The rather famous tautology, “It’s déjà vu all over again”, has of course been ascribed to that darling of malapropisms, baseball catcher Yogi Berra [1]. The phrase came to mind today when I came across the following exhibit:

Business Over Broadway – Kaggle Survey

© Business Over Broadway (2018). Based on Kaggle’s State of Data Science Survey 2017 (Sample size: 10,153).

The text in the above exhibit is not that clear [2], so here are the 20 top challenges [3] faced by those running Data Science teams in human-readable form:

#  | Challenge | Cited by
1  | Dirty Data | 35.9%
2  | Lack of Data Science talent in the organization | 30.2%
3  | Company politics / Lack of management/financial support for a Data Science team | 27.0%
4  | The lack of a clear question to be answering or a clear direction to go with available data | 22.1%
5  | Unavailability of/difficult access to data | 22.0%
6  | Data Science results not used by business decision makers | 17.7%
7  | Explaining Data Science to others | 16.0%
8  | Privacy issues | 14.4%
9  | Lack of significant domain expert input | 14.2%
10 | Organization is small and cannot afford a Data Science team | 13.0%
11 | Team using multiple ad hoc development environments such as Python/R/Java etc. | 12.7%
12 | Limitations of tools | 12.0%
13 | Need to coordinate with IT | 11.8%
14 | Maintaining responsible expectations about the potential impact of Data Science projects | 11.5%
15 | Inability to integrate findings into organization’s decision-making process | 9.8%
16 | Lack of funds to buy useful datasets from external sources | 9.6%
17 | Difficulties in deployment/scoring | 8.6%
18 | Scaling Data Science solution up to full database | 8.4%
19 | Limitations in the state of the art in machine learning | 7.7%
20 | Did not instrument data useful for scientific analysis and decision-making | 6.5%
21 | I prefer not to say | 4.8%
22 | Other | 2.9%

The table above is a transcription of a transcription, so it would be remarkable if no Data Quality issues had crept in; however, let’s assume that the figures are robust enough for our purposes. Of course the people surveyed will have reported multiple issues, so the percentages above are not additive. Nevertheless there are some very obvious comments to be made (some of the above items are pertinent to more than one of the points I would like to make):

  • Data Quality / Availability remain major issues (1, 5 and 8)

    It is indeed true that Machine Learning can be quite good at dealing with some types of bad or missing data. But no technology or approach is going to be able to paper over all of the cracks if your data is essentially incomplete and of poor quality. This point (together with some others below) speaks to the need to not approach Data Science on a stand-alone basis, but as part of a more holistic approach to data matters [4].
     

  • The Human angle and a focus on Culture are imperative (3, 6, 7, 14 and 15)

    Findings are one thing; using these to take action is quite another. At the end of the day, most ventures are successful or fail because of people; the people conducting the venture, the people receiving its intended benefits and so on. Ignore this dimension of data work (or any type of work) at your peril [5].
     

  • Business Questions and Business Involvement matter (4, 6, 9 and 15)

    While in some circumstances the data can indeed “speak for itself”, it makes a lot more sense for Data Scientists to partner with business colleagues to both get direction and to help ensure that their findings lead to action [6].
     

  • Tools & Technology typically Trumped (11, 12 and 18)

    These first appear outside of the Top 10 (and 11 is a bit dubious to include here – it relates more to a proliferation of tools than to issues with any of them). I would never say that tools and technology are unimportant, but they are typically much less important than other considerations [7].

The overriding point is of course that – much as I noted recently in Convergent Evolution – there is little new under the Sun. A survey of Business Intelligence / Data Warehousing professionals back in 2010 would have generated something very like the list above. A survey of EIS [8] professionals back in 2000 would have done the same.

The important things to do – regardless of the technologies and approaches employed – are to:

  1. Understand what questions are key to the running of an organisation [9]
     
  2. Determine what data is available to support decisions in these key areas
     
  3. Ensure that the data is in a “good enough” state, appropriately consolidated / made consistent, augmented / corrected by any useful external data and made available to the right people in a timely manner
     
  4. Focus on the human aspects of acting on what data is telling us and how to use data outputs to drive positive actions

Here too, little is new under the Sun. I have been referring to essentially these same four pillars of good practice since the mid 2000s. Some of our technological advances since then have been amazing. The prospect of leveraging the power of both Data Science and Artificial Intelligence in a business context is very exciting. But to truly succeed with these newer approaches, it helps to recall the eternal verities that have always underpinned good data-centric work [10]. The survey above makes this point crystal clear.

A final corollary to this observation is something I covered in A truth universally acknowledged…. The replies to the Kaggle survey highlight the fact that, much like the conductor of an orchestra does not need to be able to play the violin to a virtuoso level, people leading Data Science teams (and broader Data Functions) need a set of rounded skills, ones honed to address the types of issues appearing in the exhibits above. The skill-set that makes for an excellent Data Scientist does not necessarily help so much with many of the less technical issues that will determine the success or failure of Data Science teams.
 


 
Notes

 
[1]
 
Other Yogi-isms included, “Always go to other people’s funerals; otherwise they won’t go to yours”, “You can observe a lot by watching” and “If you can’t imitate him, don’t copy him”.
 
[2]
 
Including that much text is a Data Visualisation challenge, I realise. I think I might have been tempted to come up with pithier categories to aid legibility.
 
[3]
 
Ignoring “I prefer not to say” and “Other”.
 
[4]
 
As laid out in my many articles about the importance of Cultural Transformation.
 
[5]
 
See: Building Momentum – How to begin becoming a Data-driven Organisation.
 
[6]
 
I make precisely this point in my recent interview for Venturi Voice (starting just after 31:38).
 
[7]
 
I make this point most forcibly back in: A bad workman blames his [Business Intelligence] tools. The technology may be different, but the learnings are just as relevant today.
 
[8]
 
Executive Information Systems for those of tender years.
 
[9]
 
Machine learning techniques can clearly help here, but only in concert with dialogue with people actually on the front-line and leading business areas.
 
[10]
 
In your search for such eternal verities, you could do much worse than starting with: 20 Risks that Beset Data Programmes.

 



 

Convergent Evolution

Ichthyosaur and Dolphin

No, this article has not escaped from my Maths & Science section; it is actually about data matters. But first of all, channeling Jennifer Aniston [1], “here comes the Science bit – concentrate”.


 
Shared Shapes

The Theory of Common Descent holds that any two organisms, extant or extinct, will have a common ancestor if you roll the clock back far enough. For example, each of fish, amphibians, reptiles and mammals had a common ancestor over 500 million years ago. As shown below, the current organism which is most like this common ancestor is the Lancelet [2].

Chordate Common Ancestor

To bring things closer to home, each of the Great Apes (Orangutans, Gorillas, Chimpanzees, Bonobos and Humans) had a common ancestor around 13 million years ago.

Great Apes Common Ancestor

So far so simple. As one would expect, animals sharing a recent common ancestor would share many attributes with both it and each other.

Convergent Evolution refers to something else. It describes where two organisms independently evolve very similar attributes that were not features of their most recent common ancestor. Thus these features are not inherited; instead evolutionary pressure has led to the same attributes developing twice. An example probably makes this easier to understand.

The image at the start of this article is of an Ichthyosaur (top) and a Dolphin. It is striking how similar their body shapes are. They also share other characteristics, such as live birth of young, tail first. The last Ichthyosaurs died out around 90 million years ago, well before the Dinosaurs [3]. Dolphins are happily still with us, but the first toothed whale (not a Dolphin, but probably an ancestor of them) appeared around 30 million years ago. The ancestors of the modern Bottlenose Dolphins appeared a mere 5 million years ago. Thus there is a tremendous gap of time between the last Ichthyosaur and the proto-Dolphins. Ichthyosaurs were reptiles; they were covered in small scales [4]. Dolphins are mammals and covered in skin not massively different to our own. The most recent common ancestor of Ichthyosaurs and Dolphins probably lived around a quarter of a billion years ago and looked like neither of them. So the shape and other attributes of Ichthyosaurs do not come from a common ancestor; they developed independently (and millions of years apart) as adaptations to similar lifestyles as marine hunters. This is the essence of Convergent Evolution.

That was the Science, here comes the Technology…


 
A Brief Hydrology of Data Lakes

From 2000 to 2015, I had some success [5] with designing and implementing Data Warehouse architectures much like the following:

Data Warehouse Architecture

As a lot of my work then was in Insurance or related fields, the Analytical Repositories tended to be Actuarial Databases and / or Exposure Management Databases, developed in collaboration with such teams. Even back then, these were used for activities such as Analytics, Dashboards, Statistical Modelling, Data Mining and Advanced Visualisation.

Overlapping with the above, from around 2012, I began to get involved in also designing and implementing Big Data Architectures; initially for narrow purposes and later Data Lakes spanning entire enterprises. Of course some architectures featured both paradigms as well.

One of the early promises of a Data Lake approach was that – once all relevant data had been ingested – this would be directly leveraged by Data Scientists to derive insight.

Over time, it became clear that it would be useful to also have some merged / conformed and cleansed data structures in the Data Lake. Once the output of Data Science began to be used to support business decisions, a need arose to consider how it could be audited and both data privacy and information security considerations also came to the fore.

Next, rather than just being the province of Data Scientists, there were moves to use Data Lakes to support general Data Discovery and even business Reporting and Analytics as well. This required additional investments in metadata.

The types of issues with Data Lake adoption that I highlighted in Draining the Swamp earlier this year also led to the advent of techniques such as Data Curation [6]. In parallel, concerns about expensive Data Science resource spending 80% of their time in Data Wrangling [7] led to the creation of a new role, that of Data Engineer. These people take on much of the heavy lifting of consolidating, fixing and enriching datasets, allowing the Data Scientists to focus on Statistical Analysis, Data Mining and Machine Learning.
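To make this division of labour concrete, here is a minimal, hypothetical sketch (in Python with pandas; the file and column names are invented for illustration) of the kind of consolidation, fixing and enrichment work that might sit with a Data Engineer:

```python
import pandas as pd

# Hypothetical extracts from two source systems (invented names)
policies = pd.read_csv("policies_system_a.csv")    # policy_id, customer_id, premium
customers = pd.read_csv("customers_system_b.csv")  # customer_id, country_code

# Consolidate: join the two sources on their shared key
combined = policies.merge(customers, on="customer_id", how="left")

# Fix: drop exact duplicates and flag policies with no matching customer
combined = combined.drop_duplicates()
orphans = combined["country_code"].isna()
print(f"{orphans.sum()} policies have no matching customer record")

# Enrich: add a derived attribute for downstream analysis
combined["premium_band"] = pd.cut(
    combined["premium"],
    bins=[0, 1_000, 10_000, float("inf")],
    labels=["small", "medium", "large"],
)

# Hand the curated dataset over for Statistical Analysis / Machine Learning
combined.to_parquet("curated_policies.parquet")
```

The point is less the specific operations and more that this preparatory work gets done once, consistently, rather than by each Data Scientist in their own way.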

Big Data Architecture

All of which leads to a modified Big Data / Data Lake architecture, embodying people and processes as well as technology and looking something like the exhibit above.

This is where the observant reader will see the concept of Convergent Evolution playing out in the data arena as well as the Natural World.


 
In Closing

Convergent Evolution of Data Architectures

Lest it be thought that I am saying that Data Warehouses belong to a bygone era, it is probably worth noting that the ancient reptiles, Ichthyosaurs included, dominated the Earth for far longer than the mammals have so far managed, and were only dethroned by an asymmetric external shock, not by any flaw in their own finely honed characteristics.

Also, to be crystal clear, just as there are both similarities and clear differences between Ichthyosaurs and Dolphins, the same applies to Data Warehouse and Data Lake architectures. When you get into the details, differences between Data Lakes and Data Warehouses do emerge; there are capabilities that each has that are not features of the other. What is undoubtedly true however is that the same procedural and operational considerations that played a part in making some Warehouses seem unwieldy and unresponsive are also beginning to have the same impact on Data Lakes.

If you are in the business of turning raw data into actionable information, then there are inevitably considerations that will apply to any technological solution. The key lesson is that the shape of your architecture is going to be pretty similar, regardless of the technical underpinnings.


 
Notes

 
[1]
 
The two of us are constantly mistaken for one another.
 
[2]
 
To be clear the common ancestor was not a Lancelet, rather Lancelets sit on the branch closest to this common ancestor.
 
[3]
 
Ichthyosaurs are not Dinosaurs, but a different branch of ancient reptiles.
 
[4]
 
This is actually a matter of debate in paleontological circles, but recent evidence suggests small scales.
 
[5]
 
See:

 
[6]
 
A term that is unaccountably missing from The Data & Analytics Dictionary – something to add to the next release. UPDATE: Now remedied here.
 
[7]
 
Ditto. UPDATE: Now remedied here.

 



 

Fact-based Decision-making

All we need is fact-based decision-making ma'am
This article is about facts. Facts are sometimes less solid than we would like to think; sometimes they are downright malleable. To illustrate, consider the fact that in 98 episodes of Dragnet, Sergeant Joe Friday never uttered the words “Just the facts Ma’am”, though he did often employ the variant alluded to in the image above [1]. Equally, Rick never said “Play it again Sam” in Casablanca [2] and St. Paul never suggested that “money is the root of all evil” [3]. As Michael Caine never said in any film, “not a lot of people know that” [4].

 
Up-front Acknowledgements

These normally appear at the end of an article, but it seemed to make sense to start with them in this case:

Recently I published Building Momentum – How to begin becoming a Data-driven Organisation. In response to this, one of my associates, Olaf Penne, asked me about my thoughts on fact-based decision-making. This piece was prompted by both Olaf’s question and a recent article by my friend Neil Raden on his Silicon Angle blog, Performance management: Can you really manage what you measure? Thanks to both Olaf and Neil for the inspiration.

Fact-based decision-making. It sounds good, doesn’t it? Especially if you consider the alternatives: going on gut feel, doing what you did last time, guessing, not taking a decision at all. However – as is often the case with issues I deal with on this blog – fact-based decision-making is easier to say than it is to achieve. Here I will look to cover some of the obstacles and suggest a potential way to navigate round them. Let’s start however with some definitions.

Fact NOUN A thing that is known or proved to be true.
(Oxford Dictionaries)
Decision NOUN A conclusion or resolution reached after consideration.
(Oxford Dictionaries)

So one can infer that fact-based decision-making is the process of reaching a conclusion based on consideration of things that are known to be true. Again, it sounds great, doesn’t it? It seems that all you have to do is to find things that are true. How hard can that be? Well actually quite hard as it happens. Let’s cover what can go wrong (note: this section is not intended to be exhaustive; links are provided to more in-depth articles where appropriate):


 
Accuracy of Data that is captured

Data Accuracy

A number of factors can play into the accuracy of data capture. Some systems (even in 2018) can still make it harder to capture good data than to ram in bad. Often an issue may also be a lack of master data definitions, so that similar data is labelled differently in different systems.

A more pernicious problem is combinatorial data accuracy: two data items are each valid on their own, but not in combination with each other. However, often the biggest stumbling block is a human one: getting people to buy in to the idea that the care and attention they pay to data capture will pay dividends later in the process.
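To illustrate the combinatorial point, here is a minimal sketch (in Python; the fields and the rule table are invented for illustration) of a check in which each captured value passes validation on its own, but the combination fails:

```python
# Each value below is individually valid...
record = {"country": "CH", "currency": "EUR"}  # hypothetical captured record

# ...but only certain combinations make business sense (invented rule table)
VALID_COMBINATIONS = {
    "GB": {"GBP"},
    "CH": {"CHF"},
    "DE": {"EUR"},
}

def combination_is_valid(rec: dict) -> bool:
    """Return True only if the country / currency pairing is permitted."""
    return rec["currency"] in VALID_COMBINATIONS.get(rec["country"], set())

print(combination_is_valid(record))  # False: both values valid apart, not together
```

Field-level validation alone would wave this record through; only a rule that looks at the two items together catches the problem.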

These and other areas are covered in greater detail in an older article, Using BI to drive improvements in data quality.
 
 
Honesty of Data that is captured

Honesty of Data

Data may be perfectly valid, but still not represent reality. Here I’ll let Neil Raden point out the central issue in his customary style:

People find the most ingenious ways to distort measurement systems to generate the numbers that are desired, not only NOT providing the desired behaviors, but often becoming more dysfunctional through the effort.

[…] voluntary compliance to the [US] tax code encourages a national obsession with “loopholes”, and what salesman hasn’t “sandbagged” a few deals for next quarter after she has met her quota for the current one?

Where there is a reward to be gained or a punishment to be avoided, by hitting certain numbers in a certain way, the creativeness of humans often comes to the fore. It is hard to account for such tweaking in measurement systems.
 
 
Timing issues with Data

Timing Issues

Timing is often problematic. For example, a transaction completed near the end of a period gets recorded in the next period instead; one early in a new period goes into the prior period, which is still open. There is also (as referenced by Neil in his comments above) the delayed booking of transactions in order to – with the nicest possible description – smooth revenues. It is not just hypothetical salespeople who do this of course. Entire organisations can make smoothing adjustments to their figures before publishing, and deferral or expedition of obligations and earnings has become something of an art form in accounting circles. While no doubt most of this tweaking is done with the best intentions, it can compromise the fact-based approach that we are aiming for.
 
 
Reliability with which Data is moved around and consolidated

Data Transcription

In our modern architectures, replete with web-services, APIs, cloud-based components and the quasi-instantaneous transmission of new transactions, it is perhaps not surprising that occasionally some data gets lost in translation [5] along the way. That is before data starts to be Sqooped up into Data Lakes, or other such Data Repositories, and then otherwise manipulated in order to derive insight or provide regular information. All of these are processes which can introduce their own errors. Suffice it to say that transmission, collation and manipulation of data can all reduce its accuracy.
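As a minimal illustration (in Python with pandas; the file and column names are invented), here is a simple control-total check of the kind commonly used to catch data lost or altered in transit between repositories:

```python
import pandas as pd

# Hypothetical extracts of the same transactions, as sent and as landed
source = pd.read_csv("transactions_source.csv")
target = pd.read_csv("transactions_landed.csv")

# Control totals: row counts plus a sum over a key financial field
checks = {
    "row count": (len(source), len(target)),
    "sum of amounts": (
        round(source["amount"].sum(), 2),
        round(target["amount"].sum(), 2),
    ),
}

for name, (sent, landed) in checks.items():
    status = "OK" if sent == landed else "MISMATCH - investigate"
    print(f"{name}: sent={sent}, landed={landed} [{status}]")
```

Such checks do not prove the data is right, but they cheaply flag when something has gone missing along the way.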

Again see Using BI to drive improvements in data quality for further details.
 
 
Pertinence and fidelity of metrics developed from Data

Data Metric

Here we get past issues with the data itself (or how it is handled and moved around) and instead consider how it is used. Metrics are seldom reliant on just one data element; more often they are combinations of several. The different elements might come in because a given metric is arithmetical in nature, e.g.

\text{Metric X} = \dfrac{\text{Data Item A}+\text{Data Item B}}{\text{Data Item C}}

Choices are made as to how to construct such compound metrics and how to relate them to actual business outcomes. For example:

\text{New Biz Growth} = \dfrac{(\text{Sales CYTD}-\text{Repeat CYTD})-(\text{Sales PYTD}-\text{Repeat PYTD})}{(\text{Sales PYTD}-\text{Repeat PYTD})}

Is this a good way to define New Business Growth? Are there any weaknesses in this definition, for example is it sensitive to any glitches in – say – the tagging of Repeat Business? Do we need to take account of pricing changes between Repeat Business this year and last year? Is New Business Growth something that is even worth tracking; what will we do as a result of understanding this?
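As a minimal worked example (in Python, with hypothetical figures), the definition above might be computed as follows; note how sensitive the result is to how Repeat Business is tagged:

```python
def new_biz_growth(sales_cytd, repeat_cytd, sales_pytd, repeat_pytd):
    """New Business Growth as defined above: the change in non-repeat sales,
    expressed as a fraction of the prior year figure."""
    new_cytd = sales_cytd - repeat_cytd  # new business, current year to date
    new_pytd = sales_pytd - repeat_pytd  # new business, prior year to date
    return (new_cytd - new_pytd) / new_pytd

# Hypothetical figures (in £ million)
print(f"{new_biz_growth(120, 80, 110, 75):+.1%}")  # +14.3%

# The same sales, but with £5m of new business mis-tagged as repeat
print(f"{new_biz_growth(120, 85, 110, 75):+.1%}")  # +0.0%
```

A single tagging glitch turns healthy apparent growth into none at all, which is exactly why the questions above matter.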

New Business Growth, as defined above, is a somewhat simple metric; in a section of Using historical data to justify BI investments – Part I, I cover some actual Insurance industry metrics that build on each other and are a little more convoluted. The same article also considers how to – amongst other things – match revenue and outgoings when the latter are spread over time. There are often compromises to be made in defining metrics. Some of these are based on the data available. Some relate to inherent issues with what is being measured. In other cases, a metric may be a best approximation to some indication of business health; a proxy used because that indication is not directly measurable itself. In this last case, staff turnover may be a proxy for staff morale, but it does not directly measure how employees are feeling (a competitor might be poaching otherwise happy staff for example).
 
 
Robustness of extrapolations made from Data

By the third trimester, there will be hundreds of babies inside you...

© Randall Munroe, xkcd.com

I have used the above image before in these pages [6]. The situation it describes may seem farcical, but it is actually not too far away from some extrapolations I have seen in a business context. For example, a prediction of full-year sales may consist of this year’s figures for the first three quarters supplemented by prior year sales for the final quarter. While our metric may be better than nothing, there are some potential distortions related to such an approach:

  1. Repeat business may have fallen into Q4 last year, but was processed in Q3 this year. This shift in timing would lead to such business being double-counted in our year end estimate.
     
  2. Taking point 1 to one side, sales may be growing or contracting compared to the previous year. Using Q4 prior year as is would not reflect this.
     
  3. It is entirely feasible that some market event occurs this year (for example, the entrance or exit of a competitor, or the launch of a new competitor product) which would render prior year figures a poor guide.

Of course all of the above can be adjusted for, but such adjustments would be reliant on human judgement, making any projections similarly reliant on people’s opinions (which, as Neil points out, may be influenced, consciously or unconsciously, by self-interest). Where sales are based on conversions of prospects, the quantum of prospects might be a more useful predictor of Q4 sales. However, here a historical conversion rate would need to be calculated (or conversion probabilities allocated by the salespeople involved) and we are back to essentially the same issues as catalogued above.
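A minimal sketch (in Python, with hypothetical quarterly figures) of the naive full-year extrapolation described above, alongside a crude growth-adjusted variant; both remain hostage to the caveats just listed:

```python
# Hypothetical quarterly sales (£ million)
this_year = [30, 32, 35]        # Q1-Q3 actuals, current year
prior_year = [28, 29, 31, 33]   # Q1-Q4 actuals, prior year

# Naive estimate: this year's actuals plus last year's Q4 as-is
naive_full_year = sum(this_year) + prior_year[3]

# Crude adjustment: scale prior year Q4 by year-on-year growth to date
growth = sum(this_year) / sum(prior_year[:3])
adjusted_full_year = sum(this_year) + prior_year[3] * growth

print(f"Naive:    £{naive_full_year:.1f}m")     # £130.0m
print(f"Adjusted: £{adjusted_full_year:.1f}m")  # £133.4m
```

Neither figure accounts for the timing shifts or market events described above; the adjustment merely swaps one assumption (no growth) for another (uniform growth).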

I explore some similar themes in a section of Data Visualisation – A Scientific Treatment.
 
 
Integrity of statistical estimates based on Data

Statistical Data

Having spent 18 years working in various parts of the Insurance industry, I am quite familiar with statistical estimates forming part of the standard set of metrics [7]. However such estimates appear in a number of industries, sometimes explicitly, sometimes implicitly. A clear parallel would be credit risk in Retail Banking, but something as simple as an estimate of potentially delinquent debtors is an inherently statistical figure (albeit one that may not depend on the output of a statistical model).

The thing with statistical estimates is that they are never a single figure but a range. A model may for example spit out a figure like £12.4 million ± £0.5 million. Let’s unpack this.

Example distribution

Well the output of the model will probably be something analogous to the above image. Here a distribution has been fitted to the business event being modelled. The central point of this (the one most likely to occur according to the model) is £12.4 million. The model is not saying that £12.4 million is the answer; it is saying it is the central point of a range of potential figures. We typically next select a symmetrical range above and below the central figure such that we cover a high proportion of the possible outcomes for the figure being modelled; 95% of them is typical [8]. In the above example, the range extends £0.5 million above £12.4 million and £0.5 million below it (hence the ± sign).

Of course the problem is then that Financial Reports (or indeed most Management Reports) are not set up to cope with plus or minus figures, so typically one of £12.4 million (the central prediction) or £11.9 million (the most conservative estimate [9]) is used. The fact that the number itself is uncertain can get lost along the way. By the time that people who need to take decisions based on such information are in the loop, the inherent uncertainty of the prediction may have disappeared. This can be problematic. Suppose a real result of £12.4 million sees an organisation breaking even, but one of £11.9 million sees a small loss being recorded. This could have quite an influence on what course of action managers adopt [10]; are they relaxed, or concerned?
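As a minimal sketch (in Python with SciPy; the normal model and its parameters are invented to match the figures above), here is how such a central estimate and symmetric 95% range might be produced, and how easily the range gets dropped:

```python
from scipy import stats

# Hypothetical fitted model: outcomes ~ Normal(mean 12.4, sd 0.255), £ million
model = stats.norm(loc=12.4, scale=0.255)

central = model.mean()               # the single most likely figure
lower, upper = model.interval(0.95)  # symmetric range covering 95% of outcomes

print(f"Central estimate: £{central:.1f}m")
print(f"95% range: £{lower:.1f}m to £{upper:.1f}m")  # roughly £11.9m to £12.9m

# What often survives into the Management Report:
print(f"Forecast: £{central:.1f}m")  # the ± has quietly disappeared
```

The last line is the point: by the time the figure reaches a report, the uncertainty has often gone.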

Beyond the above, it is not exactly unheard of for statistical models to have glitches, sometimes quite big glitches [11].

This segment could easily expand into a series of articles itself. Hopefully I have covered enough to highlight that there may be some challenges in this area.
 
 
And so what?

The dashboard has been updated, how thrilling...

Even if we somehow avoid all of the above pitfalls, there remains one booby-trap that is likely to snare us, absent the necessary diligence. This was alluded to in the section about the definition of metrics:

Is New Business Growth something that is even worth tracking; what will we do as a result of understanding this?

Unless a reported figure, or output of a model, leads to action being taken, it is essentially useless. Facts that never lead to anyone doing anything are like lists learnt by rote at school and regurgitated on demand parrot-fashion; they demonstrate the mechanism of memory, but not that of understanding. As Neil puts it in his article:

[…] technology is never a solution to social problems, and interactions between human beings are inherently social. This is why performance management is a very complex discipline, not just the implementation of dashboard or scorecard technology.


 
How to Measure the Unmeasurable

Measuring the Unmeasurable

Our dream of fact-based decision-making seems to be crumbling to dust. Regular facts are subject to data quality issues, or manipulation by creative humans. As data is moved from system to system and repository to repository, the facts can sometimes acquire an “alt-” prefix. Timing issues and the design of metrics can also erode accuracy. Then there are many perils and pitfalls associated with simple extrapolation and less simple statistical models. Finally, any fact that manages to emerge from this gantlet [12] unscathed may then be totally ignored by those whose actions it is meant to guide. What can be done?

As happens elsewhere on this site, let me turn to another field for inspiration. Not for the first time, let’s consider what Science can teach us about dealing with such issues with facts. In a recent article [13] in my Maths & Science section, I examined the nature of Scientific Theory and – in particular – explored the imprecision inherent in the Scientific Method. Here is some of what I wrote:

It is part of the nature of scientific theories that (unlike their Mathematical namesakes) they are not “true” and indeed do not seek to be “true”. They are models that seek to describe reality, but which often fall short of this aim in certain circumstances. General Relativity matches observed facts to a greater degree than Newtonian Gravity, but this does not mean that General Relativity is “true”, there may be some other, more refined, theory that explains everything that General Relativity does, but which goes on to explain things that it does not. This new theory may match reality in cases where General Relativity does not. This is the essence of the Scientific Method, never satisfied, always seeking to expand or improve existing thought.

I think that the Scientific Method that has served humanity so well over the centuries is applicable to our business dilemma. In the same way that a Scientific Theory is never “true”, but instead useful for explaining observations and predicting the unobserved, business metrics should be judged less on their veracity (though it would be nice if they bore some relation to reality) and instead on how often they lead to the right action being taken and the wrong action being avoided. This is an argument for metrics to be simple to understand and tied to how decision-makers actually think, rather than some other more abstruse and theoretical definition.

A proxy metric is fine, so long as it yields the right result (and the right behaviour) more often than not. A metric with dubious data quality is still useful if it points in the right direction; if the compass needle is no more than a few degrees out. While of course steps that improve the accuracy of metrics are valuable and should be undertaken where cost-effective, at least equal attention should be paid to ensuring that – when the metric has been accessed and digested – something happens as a result. This latter goal is a long way from the arcana of data lineage and metric definition; it is instead the province of human psychology, something that the accomplished data professional should be adept at influencing.

I have touched on how to positively modify human behaviour in these pages a number of times before [14]. It is a subject that I will be coming back to again in coming months, so please watch this space.
 


Further reading on this subject:


 
Notes

 
[1]
 
According to Snopes, the phrase arose from a spoof of the series.
 
[2]
 
The two pertinent exchanges were instead:

Ilsa: Play it once, Sam. For old times’ sake.
Sam: I don’t know what you mean, Miss Ilsa.
Ilsa: Play it, Sam. Play “As Time Goes By”
Sam: Oh, I can’t remember it, Miss Ilsa. I’m a little rusty on it.
Ilsa: I’ll hum it for you. Da-dy-da-dy-da-dum, da-dy-da-dee-da-dum…
Ilsa: Sing it, Sam.

and

Rick: You know what I want to hear.
Sam: No, I don’t.
Rick: You played it for her, you can play it for me!
Sam: Well, I don’t think I can remember…
Rick: If she can stand it, I can! Play it!
 
[3]
 
Though he, or whoever may have written the first epistle to Timothy, might have condemned the “love of money”.
 
[4]
 
The origin of this was a Peter Sellers interview in which he impersonated Caine.
 
[5]
 
One of my Top Ten films.
 
[6]
 
Especially for all Business Analytics professionals out there (2009).
 
[7]
 
See in particular my trilogy:

  1. Using historical data to justify BI investments – Part I (2011)
  2. Using historical data to justify BI investments – Part II (2011)
  3. Using historical data to justify BI investments – Part III (2011)
 
[8]
 
Without getting into too many details, what you are typically doing is stating that there is a less than 5% chance that the measurements forming model input match the distribution due to a fluke; but this is not meant to be a primer on null hypotheses.
 
[9]
 
Of course, depending on context, £12.9 million could instead be the most conservative estimate.
 
[10]
 
This happens a lot in election polling. Candidate A may be estimated to be 3 points ahead of Candidate B, but with an error margin of 5 points, it should be no real surprise when Candidate B wins the ballot.
 
[11]
 
Try googling Nobel Laureates Myron Scholes and Robert Merton and then look for references to Long-term Capital Management.
 
[12]
 
Yes I meant “gantlet” that is the word in the original phrase, not “gauntlet” and so connections with gloves are wide of the mark.
 
[13]
 
Finches, Feathers and Apples (2018).
 
[14]
 
For example:

 



 

Building Momentum – How to begin becoming a Data-driven Organisation

Building Momentum - Becoming a Data Driven Organisation

Larger, annotated PDF version

Introduction

It is hard to find an organisation that does not aspire to being data-driven these days. While there is undoubtedly an element of me-tooism about some of these statements (or a fear of competitors / new entrants who may use their data better, gaining a competitive advantage), often there is a clear case for the better leverage of data assets. This may be to do with the stand-alone benefits of such an approach (enhanced understanding of customers, competitors, products / services etc. [1]), or as a keystone supporting a broader digital transformation.

However, in my experience, many organisations have much less mature ideas about how to achieve their data goals than they do about setting them. Given the lack of executive experience in data matters [2], it is not atypical that one of the large strategy consultants is engaged to shape a data strategy; one of the large management consultants is engaged to turn this into something executable and maybe to select some suitable technologies; and one of the large systems integrators (or increasingly off-shore organisations migrating up the food chain) is engaged to do the work, which by this stage normally relates to building technology capabilities, implementing a new architecture or some other technology-focussed programme.

Juggling Third Parties

Even if each of these partners does a great job – which one would hope they do at their price points – a few things invariably get lost along the way. These include:

  1. A data strategy that is closely coupled to the organisation’s actual needs rather than something more general.

    While there are undoubtedly benefits in adopting best practice for an industry, there is also something to be said for a more tailored approach, tied to business imperatives and which may have the possibility to define the new best practice. In some areas of business, it makes sense to take the tried and tested approach, to be a part of the herd. In others – and data is in my opinion one of these – taking a more innovative and distinctive path is more likely to lead to success.
     

  2. Connective tissue between strategy and execution.

    The distinctions between the three types of organisations I cite above are becoming more blurry (not least as each seeks to develop new revenue streams). This can lead to the strategy consultants developing plans, which get ripped up by the management consultants; the management consultants revisiting the initial strategy; the systems integrators / off-shorers replanning, or opening up technical and architecture discussions again. Of course this means the client paying at least twice for this type of work. What also disappears is the type of accountability that comes when the same people are responsible for developing a strategy, turning this into a practical plan and then executing this [3].
     

  3. Focus on the cultural aspects of becoming more data-driven.

    This is both one of the most important factors that determines success or failure [4] and something that – frankly because it is not easy to do – often falls by the wayside. By the time that the third external firm has been on-boarded, the name of the game is generally building something (e.g. a Data Lake, or an analytics platform) rather than the more human questions of who will use this, in what way, to achieve which business objectives.

Of course a way to address the above is to allocate some experienced people (internal or external, ideally probably a blend) who stay the course from development of data strategy through fleshing this out to execution and who – importantly – can also take a lead role in driving the necessary cultural change. It also makes sense to think about engaging organisations who are small enough to tailor their approach to your needs and who will not force a “cookie cutter” approach. I have written extensively about how – with the benefit of such people on board – to run such a data transformation programme [5]. Here I am going to focus on just one phase of such a programme and often the most important one; getting going and building momentum.


 
A Third Way

There are a couple of schools of thought here:

  1. Focus on laying solid data foundations and thus build data capabilities that are robust and will stand the test of time.
     
  2. Focus on delivering something ASAP in the data arena, which will build the case for further investment.

There are points in favour of both approaches and criticisms that can be made of each as well. For example, while the first approach will be necessary at some point (and indeed at a relatively early one) in order to sustain a transformation to a data-driven organisation, it obviously takes time and effort. Exclusive focus on this area can use up money and political capital, and try the patience of sponsors. Few business initiatives will be funded for years if they do not begin to have at least some return relatively soon. This remains the case even if the benefits down the line are potentially great.

Equally, the second approach can seem very productive at first, but will generally end up trying to make a silk purse out of a sow’s ear [6]. Inevitably, without improvements to the underlying data landscape, limitations in the type of useful analytics that can be carried out will be reached, sometimes sooner than might be thought. While I don’t generally refer to religious topics on this blog [7], the Parable of the Sower is apposite here. Focussing on delivering analytics without attending to the broader data landscape is indeed like the seed that fell on stony ground. The practice yields results that spring up, only to wilt when the sun gets hot, given that they have no real roots [8].

So what to do? Well, there is a Third Way. This involves blending both approaches. I tend to think of this in the following way:

Proportion of Point and Strategic Data Activities over Time

First of all, this is a cartoon, it is not intended to indicate actual percentages, just to illustrate a general trend. In real life, it is likely that you will cycle round multiple times and indeed have different parallel work-streams at different stages. The general points I am trying to convey with this diagram are:

  1. At the beginning of a data transformation programme, there should probably be more emphasis on interim delivery and tactical changes. However, importantly, there is never zero strategic work. As things progress, the emphasis should swing more to strategic, long-term work. But again, even in a mature programme, there is never zero tactical work. There can also of course be several iterations of such shifts in approach.
     
  2. Interim and tactical steps should relate to not just analytics, but also to making point fixes to the data landscape where possible. It is also important to kick off diagnostic work, which will establish how bad things are and also suggest areas which could be attacked sooner rather than later; this too can initially be done on a tactical basis and then made more robust later. In general, if you consider the span of strategic data work, it makes sense to kick off cut-down (and maybe drastically cut-down) versions of many activities early on.
     
  3. Importantly, the tactical and strategic work-streams should not be hermetically sealed. What you actually want is healthy interplay. Building some early, “quick and dirty” analytics may highlight areas that should be covered by a data audit, or where there are obvious weaknesses in a data architecture. Any data assets that are built on a more strategic basis should also be leveraged by tactical work, improving its utility and probably increasing its lifespan.

 
Interconnected Activities

At the beginning of this article, I present a diagram (repeated below) which covers three types of initial data activities, the sort of work that – if executed competently – can begin to generate momentum for a data programme. The exhibit also references Data Strategy.

Building Momentum - Becoming a Data Driven Organisation

Larger, annotated PDF version

Let’s look at each of these four things in some more detail:

  1. Analytic Point Solutions

    Where data has historically been locked up in either hard-to-use repositories or in source systems themselves, liberating even a bit of it can be very helpful. This does not have to be with snazzy tools (unless you want to showcase the art of the possible). An anecdote might help to explain.

    At one organisation, they had existing reporting that was actually not horrendous, but it was hard to access, hard to parameterise and hard to do follow-on analysis on. I took it upon myself to run 30-plus reports on a weekly and monthly basis, download the contents to Excel, front these with some basic graphs and make these all available on an intranet. This meant that people from Country A or Department B could go straight to their figures rather than having to run fiddly reports. It also meant that they had an immediate visual overview – including some comparisons to prior periods and trends over time (which were not available in the original reports). Importantly, they also got a basic pivot table, which they could use to further examine what was going on. These simple steps (if a bit laborious for me) had a massive impact. I later replaced the Excel with pages I wrote in a new web-reporting tool we built in-house. Ultimately, my team moved these to our strategic Analytics platform.

    This shows how point solutions can be very valuable and also morph into more strategic facilities over time.
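
    Were I doing something similar today, the weekly grind could largely be scripted. What follows is a minimal sketch, assuming the figures can be reached via SQL; the connection string, table and column names are all invented for illustration.

```python
# A minimal sketch of automating a weekly report pack; all names are illustrative.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@host/warehouse")  # hypothetical warehouse

def publish_weekly_pack(country: str, path: str) -> None:
    # Pull this week's figures for one country (table and columns are invented)
    df = pd.read_sql(
        text("SELECT week, department, sales FROM weekly_sales WHERE country = :c"),
        engine,
        params={"c": country},
    )
    with pd.ExcelWriter(path, engine="xlsxwriter") as writer:
        df.to_excel(writer, sheet_name="Data", index=False)
        # Add a simple trend chart so readers get an immediate visual overview
        chart = writer.book.add_chart({"type": "line"})
        chart.add_series({
            "categories": ["Data", 1, 0, len(df), 0],  # week column
            "values":     ["Data", 1, 2, len(df), 2],  # sales column
        })
        writer.sheets["Data"].insert_chart("E2", chart)

publish_weekly_pack("Country A", "country_a_weekly.xlsx")
```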
     

  2. Data Process Improvements

    Data issues may be to do with a range of problems from poor validation in systems, to bad data integration, but immature data processes and insufficient education for data entry staff are often key contributors to overall problems. Identifying such issues and quantifying their impact should be the province of a Data Audit, which is something I would recommend considering early on in a data programme. Once more this can be basic at first, considering just superficial issues, and then expand over time.

    While fixing some data process problems and making a stepped change in data quality will both probably take time and effort, it may be possible to identify and target some narrower areas in which progress can be made quite quickly. It may be that one key attribute necessary for analysis is poorly entered and validated. Good communications around this problem can help; better guidance for the people entering the data is also useful; and some “quick and dirty” reporting, highlighting problems and – hopefully – tracking improvement, can make a difference more quickly than you might expect [9].
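
    To make the “quick and dirty” reporting concrete, here is a minimal sketch that tracks the weekly percentage of records with a badly entered key attribute; the DataFrame shape, column names and valid values are all assumptions.

```python
# A minimal sketch, assuming records sit in a DataFrame with an 'entered_on'
# datetime column and a key attribute 'region'; names and values are invented.
import pandas as pd

VALID_REGIONS = {"EMEA", "APAC", "AMER"}

def weekly_quality_trend(records: pd.DataFrame) -> pd.Series:
    records = records.copy()
    # A record is "bad" if the key attribute is missing or not in the valid set
    records["bad"] = ~records["region"].isin(VALID_REGIONS)
    records["week"] = records["entered_on"].dt.to_period("W")
    # Percentage of bad records per week - is it falling after the comms push?
    return records.groupby("week")["bad"].mean() * 100
```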
     

  3. Data Architecture Enhancements

    Improving a Data Architecture sounds like a multi-year task and indeed it can often be just that. However, it may be that there are some areas where judicious application of limited resource and funds can make a difference early on. A team engaged in a data programme should seek out such opportunities and expect to devote time and attention to them in parallel with other work. Architectural improvements would be best coordinated with data process improvements where feasible.

    An example might be providing a web-based tool to look up valid codes for entry into a system. Of course it would be a lot better to embed this functionality in the system itself, but it may take many months to include this in a change schedule whereas the tool could be made available quickly. I have had some success with extending such a tool to allow users to build their own hierarchies, which can then be reflected in either point analytics solutions or more strategic offerings. It may be possible to later offer the tool’s functionality via web-services allowing it to be integrated into more than one system.
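
    As a sketch of how modest such a tool can be, the following stands up a code look-up as a tiny web service; the choice of framework (Flask), the endpoint shape and the hard-coded reference data are all illustrative assumptions rather than a recommendation.

```python
# A minimal sketch of a code look-up web service; names and data are invented.
from flask import Flask, jsonify

app = Flask(__name__)

# In practice this would be read from the system of record or a reference-data store
VALID_CODES = {
    "cost_centre": ["CC100", "CC200", "CC300"],
    "product_line": ["PL-A", "PL-B"],
}

@app.route("/codes/<domain>")
def list_codes(domain: str):
    codes = VALID_CODES.get(domain)
    if codes is None:
        return jsonify(error=f"unknown code domain '{domain}'"), 404
    return jsonify(domain=domain, codes=codes)

if __name__ == "__main__":
    app.run()
```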
     

  4. Data Strategy

    I have written extensively about Data Strategy on this site [10]. What I wanted to cover here is the interplay between Data Strategy and some of the other areas I have just covered. It might be thought that Data Strategy is both carved on tablets of stone [11] and stands in splendid and theoretical isolation, but this should never be the case. The development of a Data Strategy should of course be informed by a situational analysis and a vision of “what good looks like” for an organisation. However, both of these things can be shaped by early tactical work. Taking cues from initial tactical work should lead to a more pragmatic strategy, one more aligned to business realities.

    Work in each of the three areas itemised above can play an important role in shaping a Data Strategy and – as the Data Strategy matures – it can obviously guide interim work as well. This should be an iterative process with lots of feedback.


 
Closing Thoughts

I have captured the essence of these thoughts in the diagram above. The important things to take away are that in order to generate momentum, you need to start to do some stuff; to extend the physical metaphor, you have to start pushing. However, momentum is a vector quantity (it has a direction as well as a magnitude [12]) and building momentum is not a lot of use unless it is in the general direction in which you want to move; so push with some care and judgement. It is also useful to realise that – so long as your broad direction is OK – you can make refinements to your direction as you pick up speed.

The above thoughts are based on my experience in a range of organisations and I am confident that they can be applied anywhere, making allowance for local cultures of course. Once momentum is established, it still needs to be maintained (or indeed increased), but I find that getting the ball moving in the first place often presents the greatest challenge. My hope is that the framework I present here can help data practitioners to get over this initial hurdle and begin to really make a difference in their organisations.
 


 
Notes

 
[1]
 
Way back in 2009, I wrote about the benefits of leveraging data to provide enhanced information. The article in question was titled Measuring the benefits of Business Intelligence. Everything I mention remains valid today in 2018.
 
[2]
 
See also:

 
[3]
 
If I may be allowed to blow my own trumpet for a moment, I have developed data / information strategies for eight organisations, turned seven of these into a costed / planned programme and executed at least the first few phases of six of these. I have always found that being a consistent presence through these phases has been beneficial to the organisations I was helping, as well as helping to reduce duplication of work.
 
[4]
 
See my, now rather venerable, trilogy about cultural change in data / information programmes:

  1. Marketing Change
  2. Education and cultural transformation and
  3. Sustaining Cultural Change

together with the rather more recent:

  1. 20 Risks that Beset Data Programmes and
  2. Ever tried? Ever failed?
 
[5]
 
See for example:

  1. Draining the Swamp
  2. Bumps in the Road and
  3. Ideas for avoiding Big Data failures and for dealing with them if they happen
 
[6]
 
Dictionary.com offers a nice explanation of this phrase.
 
[7]
 
I was raised a Catholic, but have been areligious for many years.
 
[8]
 
Much like \(x^2+x+1=0\).

For anyone interested, the two roots of this polynomial are clearly:

\[-\dfrac{1}{2}+\dfrac{\sqrt{3}}{2}\,i\quad\text{and}\quad-\dfrac{1}{2}-\dfrac{\sqrt{3}}{2}\,i\]

neither of which is Real.

 
[9]
 
See my rather venerable article, Using BI to drive improvements in data quality, for a fuller treatment of this area.
 
[10]
 
For starters see:

  1. Forming an Information Strategy: Part I – General Strategy
  2. Forming an Information Strategy: Part II – Situational Analysis
  3. Forming an Information Strategy: Part III – Completing the Strategy

and also the Data Strategy segment of The Anatomy of a Data Function – Part I.

 
[11]
 
Tablet of Stone
 
[12]
 
See Glimpses of Symmetry, Chapter 15 – It’s Space Jim….

 



 

The Anatomy of a Data Function – Part III

Part I Part II Part III

Sepia's Anatomy

This is the third and final part of my review of the anatomy of a Data Function, Part I may be viewed here and Part II here.

Update:

The data arena is a fluid one. The original set of Anatomy of a Data Function articles dates back to November 2017. As of August 2018, the data function schematic has been updated to separate out Artificial Intelligence from Data Science and to change the latter to Data Science / Engineering. No doubt further changes will be made from time to time.

In the first article, I introduced the following Data Function organogram:

The Anatomy of a Data Function

Larger PDF version (opens in a new tab)

and went on to cover each of Data Strategy, Analytics & Insight and Data Operations & Technology. In Part II, I discussed the two remaining Data Function areas of Data Architecture and Data Management. In this final article, I wanted to cover the Related Areas that appear on the right of the above diagram. This naturally segues into talking about the practicalities of establishing a Data Function and highlighting some problems to be avoided or managed.

As in Parts I and II, unless otherwise stated, text indented as a quotation is excerpted from the Data and Analytics Dictionary.
 
 
Related Areas

Related Areas

I have outlined some of the key areas with which the Data Function will work. This is not intended to be a comprehensive list and indeed the boxes may be different in different organisations. Regardless of the departments that appear here, the general approach will however be similar. I won’t go through each function in great detail here, but there are some obvious points to make. The first is an overall one: clearly a collaborative approach is mandatory. While there are undeniably some police-like attributes of any Data Function, it would be best if these were carried out by friendly community policemen or women, not paramilitaries.

So rather more:

Community Police

and rather less:

Not quite so Community Police
 
Data Privacy and Information Security

Though strongly related, these areas do not generally fall under the Data Function. Indeed some legislation requires that they are separate functions. Data Privacy and Information Security are related, but also distinct from each other. Definitions are as follows:

[Data Privacy] pertains to data held by organisations about individuals (customers, counterparties etc.) and specifically to data that can be used to identify people (personally identifiable data), or is sensitive in nature, such as medical records, financial transactions and so on. There is a legal obligation to safeguard such information and many regulations around how it can be used and how long it can be retained. Often the storage and use of such data requires explicit consent from the person involved.

Data and Analytics Dictionary entry: Data Privacy

Information Security consists of the steps that are necessary to make sure that any data or information, particularly sensitive information (trade secrets, financial information, intellectual property, employee details, customer and supplier details and so on), is protected from unauthorised access or use. Threats to be guarded against would include everything from intentional industrial espionage, to ad hoc hacking, to employees releasing or selling company information. The practice of Information Security also applies to the (nowadays typical) situation where some elements of internal information are made available via the internet. There is a need here to ensure that only those people who are authenticated to access such information can do so.

Data and Analytics Dictionary entry: Information Security

 
Digital

Digital is not a box that would necessarily have appeared on this chart 15, or even 10, years ago. However, nowadays this is often an important (and large) department in many organisations. Digital departments leverage data heavily; both what they gather themselves and data drawn from other parts of the organisation. This can be to show customers their transactions, to guide next best actions, or to suggest potentially useful products or services. Given this, collaboration with the Data Function should be particularly strong.
 
Change Management

There are some specific points to make with respect to Change collaboration. One dimension of this was covered in Part II. Looking at things the other way round, as well as being a regular department, with what are laughingly referred to as “business as usual” responsibilities [1], the Data Function will also drive a number of projects and programmes. Depending on how this is approached in an organisation, this means either that the Data Function will need its own Project Managers etc., or to have such allocated from Change. This means that interactions with Change are bidirectional, which may be particularly challenging.

For some reason, Change departments have often ended up holding the purse strings for all projects and programmes (perhaps a less than ideal outcome), so a Data Function looking to get its own work done may run counter to this (see also the second section of this article).
 
IT

While the role of IT is perhaps narrower nowadays than historically [2], they are deeply involved in the world of data and the infrastructure that supports its movement around the organisation. This means that the Data Function needs to pay particular attention to its relationship with IT.
 
Embedded Analytics Teams

A wholly centralised approach to delivering Analytics is neither feasible, nor desirable. I generally recommend hybrid arrangements with a strong centralised group and affiliated analytical resource embedded in business teams. In some organisations such people may be part of the Data Function, or have a dotted line into it. In others the connection may be less formal. Whatever the arrangements, the best result would be if embedded analytical staff viewed themselves as part of a broader analytical and data community, which can share tips, work to standards and leverage each other’s work.
 
Data Stewards

Data Stewards are a concept that arises from a requirement to embed Data Governance policies and processes. Data Function Governance staff and Data Architects both need to work closely with Data Stewards. A definition is as follows:

This is a concept that arises out of Data Governance. It recognises that accountability for things like data quality, metadata and the implementation of data policies needs to be devolved to business departments and often locations. A Data Steward is the person within a particular part of an organisation who is responsible for ensuring that their data is fit for purpose and that their area adheres to data policies and guidelines.

Data and Analytics Dictionary entry: Data Steward

  
End User Computing

There are several good reasons for engaging with this area. First, the various EUCs that have been developed will embody some element (unsatisfied elsewhere) of requirements for the processing and/or distribution of data; these needs probably need to be met. Second, EUCs can present significant risks to organisations (as well as delivering significant benefits) and ameliorating these (while hopefully retaining the benefits) should be on the list of any Data Function. Third, the people who have built EUCs tend to be knowledgeable about an organisation’s data, the sort of people who can be useful sources of information and also potential allies.

[End User Computing] is a term used to cover systems developed by people other than an organisation’s IT department or an approved commercial software vendor. It may be that such software is developed and maintained by a small group of people within a department, but more typically a single person will have created and cares for the code. EUCs may be written in mainstream languages such as Java, C++ or Python, but are frequently instead Excel- or Access-based, leveraging their shared macro/scripting language, VBA (for Visual Basic for Applications). While related to Microsoft Visual Basic (the precursor to .NET), VBA is not a stand-alone language and can only run within a Microsoft Office application, such as Excel.

Data and Analytics Dictionary entry: End User Computing (EUC)

 
Third Party Providers

Often such organisations may be contracted through the IT function; however the Data Function may also hire its own consultants / service providers. In either case, the Data Function will need to pay similar attention to external groups as it does to internal service providers.
 
 
Building a Data Function for the Practical Man [3]

Flag Planting for the Practical Man

When I published Part I of this trilogy, many people were kind enough to say that they found reading it helpful. However, some of the same people went on to ask for some practical advice on how to go about setting up such a Data Function and – in particular – how to navigate the inevitable political hurdles. While I don’t believe in recipes for success that are guaranteed to work in all circumstances, the second section of this article will cover three selected high-level themes that I think are helpful to bear in mind at the start of a Data Function journey. Here I am assuming that you are the leader of the nascent Data Function and it is your accountability to build the team while adding demonstrable business value [4].

Starting Small

It is a truth universally acknowledged, that a Leader newly in possession of a Data Function, must be in want of some staff [5]. However seldom will such a person be furnished with a budget and headcount commensurate with the task at hand; at least in the early days. Often instead, the mission, should you choose to accept it, is to begin to make a difference in the Data World with a skeleton crew at best [6]. Well no one can work miracles and so it is a question of judgement where to apply scarce resource.

My view is that this is best applied in shining a light on the existing data landscape, but in two ways. First, at the Analytics end of the spectrum, looking to unearth novel findings from an organisation’s data; the sort of task you give to a capable Data Scientist with some background in the industry sector they are operating in. Second, at the Governance end of the spectrum, documenting failures in existing data processing and reporting; in particular any that could expose the organisation to specific and tangible risks. In B2C organisations, an obvious place to look is in customer data. In B2B ones, you can instead look at transactions with counterparties, or at the preparation of data for external reports, either Financial or Regulatory. Here the ideal person is a competent Data Analyst with some knowledge of the existing data landscape, in particular the compromises that have to be made to work with it.

In both cases, the objective is to tell the organisation things it does not know. Positively, a glimmer of what nuggets its data holds and the impact this could have. Negatively, examples of where a poor data landscape leads to legal, regulatory, or reputational risks.

These activities can add value early on and increase demand for more of this type of work. The first investigation can lead to the creation of a Data Science team, the second to the establishment of regular Data Audits and people to run these.

A corollary here is a point that I ceaselessly make: data exploitation and data control are two sides of the same coin. By making progress in areas that sit – at least superficially – at antipodal locations within a Data Function, the connective tissue between them becomes more apparent.

BAU or Project?

There is a pernicious opinion held by an awful lot of people which goes as follows.

  1. We have issues with our data, its quality, completeness and fitness for purpose.
  2. We do not do a good enough job of leveraging our data to guide decision making.
  3. Therefore we need a data project / programme to sort this out once and for all.
  4. Where is the telephone number of the Change Director?

Well there is some logic to the above and setting up a data project (more likely a programme) is a helpful thing to do. However, this is necessary, but not sufficient [7]. Let’s consider a comparison:

  1. We need to ensure that our Financial and Management accounts are sound.
  2. It would be helpful if business leaders had good Financial reports to help them understand the state of their business.
  3. Therefore we need a Finance project / programme to sort this out once and for all.
  4. Where is the telephone number of the Change Director?

Most CFOs would view the above as their responsibility. They have an entire function focussed on such matters. Of course they may want to run some Finance projects and Change will help with this, but a Finance Department is an ongoing necessity.

To pick another example, one that illustrates just how quickly the make-up of organisations can change, just replace the word “Finance” with “Risk” in the above and “CFO” with “CRO”. While programmes may be helpful to improve either Risk or Finance, they do not run the Risk or Finance functions; the designated officers do, and they have a complement of staff to assist them. It is exactly the same with data. Data programmes will enhance your use of data or control of it, but they will not ensure the day-to-day management and leverage of data in your organisation. Running “data” is the responsibility of the designated officer [8] and they should have a complement of staff to assist them as well.

The Data Function is a “business as usual” [9] function. Conveying this fact to a range of stakeholders is going to be one of the first challenges. It may be that the couple of examples I cite above can provide some ammunition for this task.

Demolishing Demoralising Demarcations

With Data Functions and their leaders both being relatively emergent phenomena [10], the separation of duties between them and other areas of a business that also deal with data can be less than clear. Scanning down the Related Areas column of the overall Data Function chart, three entities stand out that may feel they have a strong role to play in data matters: Digital, Change Management and IT.

Of course each is correct and collaboration is the best way forward. However, human nature is not always so benign and I have several times seen jockeying for position between Data, Digital, Change and IT. Route A to resolving this is of course having clarity as to everyone’s roles and a lead Executive (normally a CEO or COO) who ensures that people play nicely with each other. Back in the real world, it will be down to the leaders in each of these areas to forge some sort of consensus about who does what and why. It is probably best to realise this upfront, rather than wasting time and effort lobbying Executives to rule on things they probably have no intention of ruling on.

Nascent Data Function leaders should be aware that there will be a tendency for other teams to carve out what might be seen as the sexier elements of Data work; this can almost seem logical when – for example – a Digital team already has a full complement of web analytics staff; surely it is just a matter of pointing these at other internal data sets, right?

If we assume that the Data Function is the last of the above mentioned departments to form, then “zero sum game” thinking would dictate that whatever is accretive to the Data Function is deleterious to existing data staff in other departments. Perhaps a good place to start in combatting this mind-set is to first acknowledge it and second to take steps to allay people’s fears. It may well make sense for some staff to gravitate to the Data Function, but only if there is a compelling logic and only if all parties agree. Offering the leaders of other departments joint decision-making on such sensitive issues can be a good confidence-building step.

Setting out explicitly to help colleagues in other departments, where feasible to do so, can make very good sense and begin the necessary work of building bridges. As with most areas of human endeavour, forging good relationships and working towards the common good are both the right thing to do and put the Data Function leader in a good place as and when more contentious discussions arise.

To make this concrete, when people in another function appear to be stepping on the toes of the Data Function, instead of reacting with outrage, it may be preferable to embrace and fully understand the work that is being done. It may even make sense to support such work, even if the ultimate view is to do things a bit differently. Insisting on organisational purity and a “my way, or the highway” attitude to data matters are both steps towards a failed Data Function. Instead, engage, listen, support and – maybe over time – seek to nudge things towards your desired state.
 
 
Closing Thoughts

That's All Folks

So we have reached the end of our anatomical journey. While maybe the information contained in these three articles would pale into insignificance compared to an actual course in human anatomy, we have nevertheless covered five main work-areas within a Data Function, splitting these down into nineteen sub-areas and cataloguing eight functions with which collaboration will be key in driving success. I have also typed over 8,000 words to convey my ideas. For those who have read all of them, thank you for your perseverance; I hope that the effort has been worthwhile and that you found some of my opinions thought-provoking.

I would also like to thank the various people who have provided positive feedback on this series via LinkedIn and Facebook. Your comments were particularly influential in shaping this final chapter.

So what are the main takeaways? Well first the word collaboration has cropped up a lot and – because data is so pervasive in organisations – the need to collaborate with a wide variety of people and departments is strong. Second, extending the human anatomy analogy, while each human shares a certain basic layout (upright, bipedal, two arms, etc.), there is considerable variation within the basic parameters. The same goes for the organogram of a Data Function that I have presented at the beginning of each of these articles. The boxes may be rearranged in some organisations, some may not sit in the Data Function in others, the amount of people allocated to each work-area will vary enormously. As with human anatomy, grasping the overall shape is more important than focussing on the inevitable variations between different people.

Third, a central concept is of course that a Data Function is necessary, not just a series of data-centric projects. Even if it starts small, some dedicated resource will be necessary and it would probably be foolish to embark on a data journey without at least a skeleton crew. Fourth, in such straitened circumstances, it is important to point early and clearly to the value of data, both in reducing potentially expensive risks and in driving insights that can save money, boost market share or improve products or services. If the budget is limited, attend to these two things first.

A fifth and final thought is how little these three articles have focussed on technology. Hadoop clusters, data visualisation suites and data governance tools all have their place, but the success or failure of data-centric work tends to pivot on more human and process considerations. This theme of technology being the least important part of data work is one I have come back to time and time again over the nine years that this blog has been published. This observation remains as true today as back in 2008.
 

Part I Part II Part III

 
Notes

 
[1]
 
BAU should in general be filed along with other mythical creatures such as Unicorns, Bigfoot, The Kraken and The Loch Ness Monster.
 
[2]
 
Not least because of the rise of Data Functions, Digital Teams and stand-alone Change Organisations.
 
[3]
 
A title borrowed from J E Thompson’s Calculus for the Practical Man; a tome read by the young Richard Feynman in childhood. Today “Calculus for the Practical Person” might be a more inclusive title.
 
[4]
 
Also known as “pulling yourself up by your bootstraps”.
 
[5]
 
I seem to be channelling JA a lot at present – see A truth universally acknowledged….
 
[6]
 
Indeed I have started on this particular journey with just myself for company on no fewer than four occasions (these three 1, 2, 3, plus at Bupa).
 
[7]
 
Once a Mathematician, always a Mathematician.
 
[8]
 
See Alphabet Soup for some ideas about what he or she might be called.
 
[9]
 
See note 1.
 
[10]
 
This is despite early high-profile CDOs beginning to appear at the turn of the millennium – Joe Bugajski was appointed VP and Chief Data Officer at Visa International in 2001 (Wikipedia).

 


 

The Anatomy of a Data Function – Part II

Part I Part II Part III

Sepia's Anatomy

This is the second part of my review of the anatomy of a Data Function, the artfully named Part I may be viewed here. As seems to happen all too often to me, this series will now extend to having a Part III, which may be viewed here.

Update:

The data arena is a fluid one. The original set of Anatomy of a Data Function articles dates back to November 2017. As of August 2018, the data function schematic has been updated to separate out Artificial Intelligence from Data Science and to change the latter to Data Science / Engineering. No doubt further changes will be made from time to time.

In the first article, I introduced the following Data Function organogram:

The Anatomy of a Data Function

Larger PDF version (opens in a new tab)

and went on to cover each of Data Strategy, Analytics & Insight and Data Operations & Technology. In Part II, I will consider the two remaining Data Function areas of Data Architecture and Data Management. Covering Related Areas, and presenting some thoughts on how to go about setting up a Data Function and the pitfalls to be faced along the way, together form the third and final part of this trilogy.

As in Part I, unless otherwise stated, text indented as a quotation is excerpted from the Data and Analytics Dictionary.
 
 
Data Architecture

Data Architecture

To be somewhat self-referential, this area acts as a cornerstone for the rest of the Data Function. While sometimes non-Data architects can seem to inhabit a loftier plane than most mere mortals, Data Architects (who definitely must be part of the Data Function and not of the Business, Enterprise or Solutions Architecture groups) tend to be more practical sorts with actual hands-on technical skills. Perhaps instead of the title “Architect”, “Structural Engineer” would be more appropriate. When a Data Architect draws a diagram with connected boxes, he or she generally understands how the connections work and could probably take a fair stab at implementing the linkages themselves. The other denizens of this area, such as Data Business Analysts, are also essentially pragmatic people, focused on real business outcomes. Data Architecture is a non-theoretical discipline and here I present some of the real-world activities in which its members are often engaged.
 
Change Portfolio Engagement

One of the most important services that a good Data Function can perform is to act as a moderator for the otherwise deleterious impact that uncontrolled (and uncoordinated) Change portfolios can have on even the best of data landscapes [1]. As I mention in another article:

Over the last decade or so, the delivery of technological change has evolved to the point where many streams of parallel work are run independently of each other with each receiving very close management scrutiny in order to ensure delivery on-time and on-budget. It should be recognised that some of this shift in modus operandi has been as a result of IT departments running projects that have spiralled out of control, or where delivery has been significantly delayed or compromised. The gimlet-like focus of Change on delivery “come Hell or High-water” represents the pendulum swinging to the other extreme.

What this shift in approach means in practice is that – as is often the case – when things go wrong or take longer than anticipated, areas of work are de-scoped to secure delivery dates. In my experience, 9 times out of 10 one of the things that gets thrown out is data-related work; be that not bothering to develop reporting on top of new systems, not integrating new data into existing repositories, not complying with data standards, or not implementing master data management.

As well as the danger of skipping necessary data related work, if some data-related work is actually undertaken, then corners may be cut to meet deadlines and budgets. It is not atypical for instance that a Change Programme, while adding their new capabilities to interfaces or ETL, compromises or overwrites existing functionality. This can mean that data-centric code is in a worse state after a Change Programme than before. My roadworks anecdote begins to feel all too apt a metaphor to employ.

Looking more broadly at Change Programmes, even without the curse of de-scopes, their focus is seldom data and the expertise of Change staff is not often in data matters. Because of this, such work can indeed seem to be analogous to continually digging up the same stretch of road for different purposes, combined with patching things up again in a manner that can sometimes be barely adequate. Extending our metaphor, the result of Change that is not controlled from a data point of view can be a landscape with lumps, bumps and pot-holes. Maybe the sewer was re-laid on time and to budget, but the road has been trashed in the process. Perhaps a new system was shoe-horned in to production, but rendered elements of an Analytical Repository useless in the process.

Excerpted from: Bumps in the Road

A primary responsibility of a properly constituted Data Function is to lean hard against the prevailing winds of Change in order to protect existing data capabilities that would otherwise likely be blown away [2]. Given the gargantuan size of most current Change teams, it makes sense to have at least a reasonable amount of Data Function resource applied to this area. Hopefully early interventions in projects and programmes can mitigate any potentially adverse impacts and perhaps even lead to Change being accretive to data landscapes, as it really ought to be.

The best approach, as with most human endeavours, is a collaborative one, with Data Function staff (probably Data Architects) getting involved in new Change projects and programmes at an early stage and shaping them to be positive from a Data dimension. However, there also needs to be teeth in the process; on occasion the Data Function must be able to prevent work that would cause true damage from going ahead – hopefully powers that need to be exercised only rarely.
 
Data Modelling

It is in this area that the practical bent of Data Architects and Data Business Analysts is seen very clearly. Data modelling mirrors the realities of systems and databases the way that Theoretical Physicists use Mathematics to model the Natural World [3]. In both cases, while there may be a degree of abstraction, the end purpose is to achieve something more concrete. A definition is as follows:

[Data Modelling is] the process of examining data sets (e.g. the database underpinning a system) in order to understand how they are structured, the relationships between their various parts and the business entities and transactions they represent. While system data will have a specific Physical Data Model (the tables it contains and their linkages), Data Modelling may instead look to create a higher-level and more abstract set of pseudo-tables, which would be easier to relate to for non-technical staff and would more closely map to business terms and activities; this is known as a Conceptual Data Model. Sitting somewhere between the two may be found Logical Data Models. There are several specific documents produced by such work, one of the most common being an Entity-Relationship diagram, e.g. a sales order has a customer and one or more line items, each of which has a product.

Data and Analytics Dictionary entry: Data Modelling
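
To make the Entity-Relationship example in the definition concrete – a sales order has a customer and one or more line items – here is a minimal sketch using Python dataclasses as a stand-in for a logical model; the entities and attributes are invented for illustration.

```python
# A minimal sketch of the order / customer / line-item example as dataclasses;
# entities and attributes are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class LineItem:
    product_code: str
    quantity: int
    unit_price: float

@dataclass
class SalesOrder:
    order_id: int
    customer: Customer                                    # exactly one customer...
    lines: list[LineItem] = field(default_factory=list)   # ...one or more line items
```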

 
Data Business Analysis

Another critical role. In my long experience of both setting up Data Functions and running Data Programmes, having good Data Business Analysts on board is often the difference between success and failure. I cannot stress enough how important this role is.

Data Business Analysts are neither regular Business Analysts, nor just Data Analysts, but rather a combination of the best of both. They do have all the requirement-gathering skills of the best BAs, but complement these with Data Modelling abilities, always seeking to translate new requirements into expanded or refined Data Models. Also the way that they approach business requirements will be very specific. The optimal way to do this is by teasing out (and then collating and categorising) business questions and then determining the information needed to answer these. A good Data Business Analyst will also have strong Data Analysis skills, being able to work with unfamiliar and lightly-documented datasets to discern meaning and link this to business concepts. A definition is as follows:

A person who has extensive understanding of both business processes and the data necessary to support these. A Business Analyst is expert at discerning what people need to do. A Data Analyst is adept at working with datasets and extracting meaning from them. A Data Business Analyst can work equally happily in both worlds at the same time. When they talk to people about their requirements for information, they are simultaneously updating mental models of the data necessary to meet these needs. When they are considering how lightly-documented datasets hang together, they constantly have in mind the business purpose to which such resources may be bent.

Data and Analytics Dictionary entry: Data Business Analyst

 
 
Data Management

Data Management

Again, it is worth noting that I have probably defined this area more narrowly than many. It could be argued that it should encompass the work I have under Data Architecture and maybe much of what is under Data Operations & Technology. The actual hierarchy is likely to be driven by factors like the nature of the organisation and the seniority of Managers in the Data Function. For good or ill, I have focussed Data Management more on the care and feeding of Data Assets in my recommended set-up. A definition is as follows:

The day-to-day management of data within an organisation, which encompasses areas such as Data Architecture, Data Quality, Data Governance (normally on behalf of a Data Governance Committee) and often some elements of data provision and / or regular reporting. The objective is to appropriately manage the lifecycle of data throughout the entire organisation, which both ensures the reliability of data and enables it to become a valuable and strategic asset.

In some organisations, Data Management and Analytics are part of the same organisation, in others they are separate but work closely together to achieve shared objectives.

Data and Analytics Dictionary entry: Data Management

 
Data Governance

There is a clear link here with some of the Data Architecture activities, particularly the Change Portfolio Engagement work-area. Governance should represent the strategic management of the data component of Change (i.e. most of Change); day-to-day collaboration would sit more in the Data Architecture area.

The management processes and policies necessary to ensure that data captured or generated within a company is of an appropriate standard to use, represents actual business facts and has its integrity preserved when transferred to repositories (e.g. Data Lakes and / or Data Warehouses, General Ledgers etc.), especially when this transfer involves aggregation or merging of different data sets. The activities that Data Governance has oversight of include the operation of and changes to Systems of Record and the activities of Data Management and Analytics departments (which may be merged into one unit, or discrete but with close collaboration).

Data Governance has a strategic role, often involving senior management. Day-to-day tasks supporting Data Governance are often carried out by a Data Management team.

Data and Analytics Dictionary entry: Data Governance

 
Data Definitions & Metadata

This is a relatively straightforward area to conceptualise. Rigorous and consistent definitions of master data and calculated data are indispensable in all aspects of how a Data Function operates and how an organisation both leverages and protects its data. Focusing on Metadata, a definition would be as follows:

[Metadata is] data about data. So descriptions of what appears in fields, how these relate to other fields and what concepts bigger constructs like Tables embody. This helps people unfamiliar with a dataset to understand how it hangs together and is good practice in the same way that documentation of any other type of code is good practice. Metadata can be used to support some elements of Data Discovery by less technical people. It is also invaluable when there is a need for Data Migration.

Data and Analytics Dictionary entry: Metadata
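
As a small illustration, machine-readable Metadata for a single table might look like the sketch below – here just a plain Python dictionary, though in practice it would more likely live in a data catalogue; the table and columns are invented.

```python
# A minimal, invented example of table-level Metadata held as a plain dictionary.
SALES_ORDER_METADATA = {
    "table": "sales_order",
    "description": "One row per confirmed customer order",
    "columns": {
        "order_id":    {"type": "int",  "description": "Surrogate key"},
        "customer_id": {"type": "int",  "description": "Foreign key to customer"},
        "order_date":  {"type": "date", "description": "Date the order was confirmed"},
    },
}
```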

 
Data Audit

One of the challenges in driving Data Quality improvements in organisations is actually highlighting the problems and their impacts. Often poor Data Quality is a hidden cost, spread across many people taking longer to do their jobs than is necessary, or specific instances where interactions with business counterparties (including customers) are compromised. Organisations obviously cope – at least in general – with these issues, but they are a drag on efficiency and, in extremis, can lead to incidents which can cause significant financial loss and/or reputational damage. A way to make such problems more explicit is via a regular Data Audit, a review of data in source systems and as it travels through various data repositories. This would include some assessment of the completeness and overall quality of data, highlighting areas of particular concern. So one component might include the percentage of active records which suffer from a significant data quality issue.

It is important that any such issues are categorised. Are they the result of less than perfect data entry procedures, which could be tightened up? Are they due to deficient validation in transactional systems, where this could be improved and there may be a role for Master Data Management? Are data interfaces between systems to blame, where these need to be reengineered or potentially replaced? Are there architectural issues with systems or repositories, which will require remedial work to address?

This information needs to be rolled up and presented in an accessible manner so that those responsible for systems and processes can understand where issues lie. Data Audits, even if partially automated, take time and effort, so it may be appropriate to carry them out quarterly. In this case, it is valuable to understand how the situation is changing over time and also to track the – hopefully positive – impact of any remedial action. Experienced Data Analysts with a good appreciation of how business is conducted in the organisation are the type of resource best suited to Data Audit work.
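
As an illustration of the roll-up step, here is a minimal sketch that computes the percentage of active records affected by each issue category, per source system; the DataFrame shape and column names are assumptions for illustration.

```python
# A minimal sketch of an audit roll-up. Assumes one row per record, with
# 'issue_category' blank (NaN) where a record is clean; names are invented.
import pandas as pd

def audit_summary(records: pd.DataFrame) -> pd.DataFrame:
    """% of active records affected by each issue category, per source system."""
    active = records[records["status"] == "active"]
    affected = (
        active.groupby(["system", "issue_category"])   # NaN (clean) rows drop out here
        .size()
        .unstack(fill_value=0)
    )
    totals = active.groupby("system").size()           # all active records per system
    return affected.div(totals, axis=0) * 100
```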
 
Data Quality

Much that needs to be said here is covered in the previous section about Data Audit. Data Quality can be defined as follows:

The characteristics of data that cover how accurately and completely it mirrors real world events and thereby how much reliance can be placed on it for the purpose of generating information and insight. Enhancing Data Quality should be a primary objective of Data Management teams.

Data and Analytics Dictionary entry: Data Quality

A Data Quality team, which would work closely with Data Audit colleagues, would be focussed on helping to drive improvements. The details of such work are covered in an earlier article, from which the following is excerpted:

There are a number of elements that combine to improve the quality of data:

  1. Improve how the data is entered
  2. Make sure your interfaces aren’t the problem
  3. Check how the data is entered / interfaced
  4. Don’t suppress bad data in your BI

As with any strategy, it is ideal to have the support of all four pillars. However, I have seen greater and quicker improvements through the fourth element than with any of the others.

Excerpted from: Using BI to drive improvements in data quality
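
To illustrate the fourth element – not suppressing bad data in your BI – the sketch below buckets records with an invalid attribute explicitly, rather than quietly filtering them out, so the problem remains visible in the very reports people rely on; the names are invented.

```python
# A minimal sketch: keep records with a bad 'region' visible as an explicit
# bucket instead of filtering them out. DataFrame and names are illustrative.
import pandas as pd

def sales_by_region(df: pd.DataFrame, valid_regions: set) -> pd.Series:
    # Replace anything outside the valid set with a loud, sortable label
    region = df["region"].where(df["region"].isin(valid_regions),
                                other="** INVALID REGION **")
    return df.assign(region=region).groupby("region")["sales"].sum()
```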

 
Master Data Management

There is some overlap here with Data Definitions & Metadata as mentioned above. Master Data Management has also been mentioned here in the context of Data Quality initiatives. However this specialist area tends to demand dedicated staff. A definition is as follows:

Master Data Management is the term used both to describe the set of processes by which Master Data is created, changed and deleted in an organisation and also the technological tools that can facilitate these processes. There is a strong relation here to Data Governance, an area which also encompasses broader objectives. The aim of MDM is to ensure that the creation of business transactions results in valid data, which can then be leveraged confidently to create Information.

Many of the difficulties in MDM arise from items of Master Data that can change over time; for example when one counterparty is acquired by another, or an organisational structure is changed (maybe creating new departments and consolidating old ones). The challenges here include how to report historical transactions that are tagged with Master Data that has now changed.

Data and Analytics Dictionary entry: Master Data Management
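
One common pattern for the historical-reporting challenge mentioned above is to hold a validity window against each version of a master record, so that transactions can be joined to the version in force on their date. The following is a minimal sketch of that pattern (my choice of illustration, not something the definition prescribes), with invented data:

```python
# A minimal sketch of versioned master data: each master record carries a
# validity window and transactions join to the version in force on their date.
import pandas as pd

masters = pd.DataFrame({
    "counterparty_id": [1, 1],
    "name": ["Acme Ltd", "Acme (a BigCorp company)"],
    "valid_from": pd.to_datetime(["2000-01-01", "2018-06-01"]),
    "valid_to":   pd.to_datetime(["2018-05-31", "2099-12-31"]),
})

transactions = pd.DataFrame({
    "counterparty_id": [1, 1],
    "trade_date": pd.to_datetime(["2017-03-15", "2018-09-01"]),
    "amount": [100.0, 250.0],
})

# Join each transaction to the master version in force on its trade date
joined = transactions.merge(masters, on="counterparty_id")
as_at = joined[(joined["trade_date"] >= joined["valid_from"]) &
               (joined["trade_date"] <= joined["valid_to"])]
print(as_at[["trade_date", "name", "amount"]])
```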

 
 
At this point, we have covered all of the work-areas within our idealised Data Function. In the third and final piece, we will consider the right-hand column of Related Areas, ones that a Data Function must collaborate with. Having covered these, the trilogy will close by offering some thoughts on the challenges of setting up a Data Function and how these may be overcome.
 

Part I Part II Part III

 
Notes

 
[1]
 
I am old enough to recall a time before Change portfolios. I can recall no organisation in which I have worked over the last 20 years in which Change portfolios have had a positive impact on data assets; maybe I have just been unlucky, but it begins to feel more like a fundamental Physical Law.
 
[2]
 
I have clearly been writing about hurricanes too much recently!
 
[3]
 
As is seen, for example, in the Introduction to my [as yet unfinished] book on the role of Group Theory in Theoretical Physics, Glimpses of Symmetry.

 


 

The Anatomy of a Data Function – Part I

Part I Part II Part III

Back in Alphabet Soup, I presented a diagram covering what I think are good and bad approaches to organising Analytics and Data Management. I wanted to offer an expanded view [1] of the good organisation chart and to talk a bit about each of its components. Originally, I planned to address these objectives across two articles. As happens to me all too frequently, the piece has now expanded to become three parts. The second may be read here, and the third here.

Update:

The data arena is a fluid one. The original set of Anatomy of a Data Function articles dates back to November 2017. As of August 2018, the data function schematic has been updated to separate out Artificial Intelligence from Data Science and to change the latter to Data Science / Engineering. No doubt further changes will be made from time to time.

Let’s leap right in and look at my suggested chart:

The Anatomy of a Data Function

Larger PDF version (opens in a new tab)

I appreciate that the above is a lot of boxes! I can feel Finance and HR staff reaching for their FTE calculators as I write. A few things to note:

  1. I have avoided the temptation to add the titles of executives, managers or team leaders. Alphabet Soup itself pointed out how tough it can be to wrestle with the nomenclature. Instead I have just focussed on areas of work.
     
  2. The term “work areas” is intentional. In larger organisations, there may be teams or individuals corresponding to each box. In smaller ones Data Function staff will wear many hats and several work areas may be covered by one person.
     
  3. In some places, a number of work areas that I have tagged as Data Function ones may be performed in other parts of the organisation, though it is to be hoped with collaboration and coordination.

Having dealt with these caveats, let’s provide some colour on each of these, progressing from top to bottom and left to right. In this first article we will consider the Data Strategy, Analytics & Insight and Data Operations & Technology areas. The second part will cover the remaining elements of Data Architecture and Data Management. The final article considers Related Areas before also covering some of the challenges that may be faced in setting up a Data Function.

In what follows, unless otherwise stated, text indented as a quotation is excerpted from the Data and Analytics Dictionary.
 
 
Data Strategy

Data Strategy

A clear strategy is obviously most important to establish in the early days of a Data Function. Indeed a Data Strategy may well call for the creation of a Data Function where none currently exists. For anyone interested in this process, I recommend my series of three articles on this subject [2]. However, a Data Strategy is not something carved in stone; it will need to be revisited and adapted (maybe significantly) as circumstances change (e.g. after an acquisition, a change in market conditions or potentially due to the emergence of some new technology). There is thus a need for ongoing work in this area. However, as demand for strategic work will tend to be lumpy, I suggest amalgamating Data Strategy with the following two sub-areas.
 
Data Comms & Education

Elsewhere on this site, I have highlighted the need for effective communication, education and assiduous follow-up in data programmes [3]. Education on data matters does not stop when a data quality drive is successfully completed, or when a new set of analytical capabilities is introduced; there is a need for an ongoing commitment here. Activities falling into this work area include: publishing regular data newsletters and infographics, designing and helping to deliver training programmes, and providing follow-up and support to aid the embedded use of new capabilities or to ingrain new behaviours.
 
Relationship Management

There is a need for all Data Function staff to establish and maintain good working relations with any colleagues they come into contact with, regardless of their level or influence. However, the generally hierarchical nature of organisations means that it is often prudent to pay special attention to more senior staff, or to the type of person (common in many companies) who may not be that senior, but whose opinion is influential. In aggregate these two groups of people are often described as stakeholders. Providing regular updates to stakeholders and ensuring both that they are comfortable with Data Function work and that this is aligned with their priorities can be invaluable [4]. Having senior, business-savvy Data Function people available to do this work is the most likely path to success.
 
 
Analytics & Insight

Analytics & Insight

Broadly speaking, the Analytics area and its sub-areas are focussed more on one-off analyses rather than the recurrent production of information [5], the latter being more the preserve of the Data Operations & Technology area. There is also more of a statistical flavour to the work carried out here.

[Analytics relates to] deriving insights from data which are generally beyond the purpose for which the data was originally captured – to be contrasted with Information which relates to the meaning inherent in data (i.e. the reason that it was captured in the first place). Analytics often employ advanced statistical techniques (logistic regression, multivariate regression, time series analysis etc.) to derive meaning from data.

Data and Analytics Dictionary entry: Analytics

 
Data Science / Engineering

I have Data Science / Engineering as a sub-area of Analytics. As with most terminology used in the data arena and most organisational units that exist in Data Functions, some people might argue that I have this the wrong way round and that Data Science / Engineering should be preeminent. Reconciling different points of view is not my objective here; I think most people will agree that both work areas should be covered. This comment pertains to many other parts of this article. Here is a two-part definition of the area (or rather the people who populate it):

[Data Scientists are people who are] au fait with exploiting data in many formats from Flat Files to Data Warehouses to Data Lakes. Such individuals possess equal abilities in the data technologies (such as Big Data) and how to derive benefit from these via statistical modelling. Data Scientists are often lapsed actual scientists.

[Data Engineering is] essentially a support function for Data Science. If you consider the messy process of sourcing data, loading it into a repository, cleansing, filling in “holes”, combining disparate data and so on, this is a somewhat different skill set to then analysing the resulting data. Early in the history of Data Science, the whole process sat with Data Scientists. Increasingly nowadays, the part before actual analysis begins is carried out by Data Engineers. These people often also concern themselves with aspects of Master Data Management and Data Architecture.

Data and Analytics Dictionary entries: Data Scientist and Data Engineering

 
Artificial Intelligence

Artificial Intelligence (AI) has its roots in the academic discipline of trying both to understand cognition and to reproduce it. In recent years, AI has come out of the lecture hall and lab and become an increasingly important aspect of the modern world, from driving our cars, to providing advice on financial products, to making recommendations for on-line purchases. Previously I had AI subsumed in the Data Science section. However, two trends have caused me to split it out in the current version of the Data Function Anatomy. First, the importance of AI is increasing rapidly. Second, it is no longer just Data Scientists who carry out this work. The advent of AI Platforms hides some of the complexity, while allowing a broader range of people to leverage the power of AI. The particular element of AI that has caught on to the greatest extent in business to date is Machine Learning.

Data and Analytics Dictionary entry: Artificial Intelligence

 
Data Visualisation

There is an overlap here with both the Data Science team within the Analytics & Insight area and the Business Intelligence team in the Data Operations & Technology area. Many of the outputs of a good Data Function will include graphs, charts and other such exhibits. However, here would be located the real specialists, the people who would set standards for the presentation of visual data across the Data Function and be the most able in leveraging visualisation tools. A definition of Data Visualisation is as follows:

Techniques – such as graphs – for presenting complex information in a manner in which it can be more easily digested by human observers. Based on the concept that a picture paints a thousand words (or a dozen Excel sheets).

Data and Analytics Dictionary entry: Data Visualisation

 
Predictive Analytics

Gartner refer to four types of Analytics: descriptive, diagnostic, predictive and prescriptive analytics. In an article I referred to these as:

  • What happened?
  • Why did it happen?
  • What is going to happen next?
  • What should we be doing?

Data and Analytics Dictionary entry: Analytics

Predictive analytics is that element of the Analytics function that aims to predict the future, “What is going to happen next?” in the above list. This can be as simple as extrapolating data based on a trend line, or can involve more sophisticated techniques such as Time Series Analysis. As with most elements of the Data Function, there is overlap between Predictive Analytics and both Data Science and Business Intelligence.
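
To illustrate the simplest case mentioned above – extrapolating a trend line – here is a minimal sketch using ordinary least squares; the monthly figures are invented:

```python
# A minimal sketch: fit a straight line to two years of (invented) monthly sales
# and project the next quarter - the "What is going to happen next?" question.
import numpy as np

months = np.arange(24)                                   # two years of history
sales = 100 + 2.5 * months + np.random.default_rng(0).normal(0, 5, 24)

slope, intercept = np.polyfit(months, sales, deg=1)      # ordinary least squares
forecast = slope * np.arange(24, 27) + intercept         # the next three months
print(forecast)
```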
 
“Skunkworks”

As with Data Strategy, state-of-the-art in Analytics & Insight will continue to evolve. This part of the Data Function will aim to keep current with the latest developments and to try out new techniques and new technologies that may later be adopted more widely by Data Function colleagues. The “skunkworks” team would be staffed by capable programmers / data scientists / statisticians.
 
 
Data Operations & Technology

It could be reasonably argued that this area is part of Data Management; I probably would not object too strongly to this suggestion. However, there are some benefits to considering it separately. This is the most IT-like of the areas considered here. It recognises that data technology (be it the Hadoop suite, Data Warehouse technology, or combinations of both) is different to many other forms of technology and needs its own specialists to focus on it. It is likely that the staff in this area will also collaborate closely with IT (see the final work area in Part II) or, in some cases, supervise work carried out by IT. As well as directly creating data capabilities, Data Operations & Technology staff would be active in the day-to-day running of these, again in collaboration with colleagues from both inside and outside of the Data Function.
 
Business Intelligence

There is no ISO definition, but I use this term as a catch-all to describe the transformation of raw data into information that can be disseminated to business people to support decision-making.

Data and Analytics Dictionary entry: Business Intelligence

This sub-area focusses on the relatively mature task of providing Business Intelligence solutions to organisations and working with IT to support and maintain these. Good BI tools work best on a sound underlying information architecture, and so there would also need to be close collaboration with Data Infrastructure staff within Data Operations & Technology, as well as with colleagues from Data Architecture and Analytics & Insight.
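
For a flavour of what turning raw data into information means in practice, here is a tiny Python / pandas sketch in which some invented transactions are summarised into a management view. Real BI tools do this interactively and at scale, but the shape of the transformation is the same:

```python
import pandas as pd

# Invented raw transactions
transactions = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "product": ["A", "B", "A", "A", "B"],
    "amount": [100, 60, 80, 120, 40],
})

# Summarise into a decision-support view: total sales by region and product
summary = transactions.pivot_table(
    index="region", columns="product", values="amount",
    aggfunc="sum", fill_value=0,
)
print(summary)
```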
 
Regular Reporting

If BI provides interactive capabilities to support decision-making, Regular Reporting is about the provision of specific key reports to relevant parties on a periodic basis: daily, weekly, monthly etc. These may be burst out to people’s e-mail accounts, provided at some central location, or both. While this is an area that is ideally automated, there will still be a significant need for human monitoring and for supporting the inevitable changes.
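
A minimal sketch of “bursting” a report to e-mail, using Python’s standard library, follows. The addresses, mail relay and report content are all placeholders, and in practice the script would be triggered by a scheduler such as cron:

```python
import smtplib
from email.message import EmailMessage

# Placeholder report body - in practice generated from the data layer
report_body = "Daily orders report\nNorth: 120\nSouth: 95\n"

msg = EmailMessage()
msg["Subject"] = "Daily Orders Report"
msg["From"] = "data.function@example.com"   # hypothetical sender
msg["To"] = "report.readers@example.com"    # hypothetical distribution list
msg.set_content(report_body)

# Hypothetical internal mail relay; the daily / weekly / monthly cadence
# itself would be handled by cron or an equivalent job scheduler
with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```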
 
Data Service

One of the things that any part of a Data Function will find itself doing on a very regular basis is crafting ad hoc data extracts for other departments, e.g. Marketing, Risk & Compliance etc. Sometimes such a need will be ongoing, and a web-service or some other Data Integration mechanism will need to be set up. Rather than having this be something that is supported out of the general running costs of the Data Function, it makes sense to have a specific unit whose role is to fulfil these needs. Even so, there may be a need for queuing and prioritisation of requests.
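
Where an extract is needed on an ongoing basis, a small web-service is one option. The following is a sketch using Python and Flask, with an invented endpoint and an in-memory data set standing in for a real query against a repository:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for a real extract query against a data repository
MARKETING_EXTRACT = [
    {"customer_id": 1, "segment": "Premium", "last_purchase": "2017-11-02"},
    {"customer_id": 2, "segment": "Standard", "last_purchase": "2017-10-27"},
]

@app.route("/extracts/marketing")  # hypothetical endpoint for one department
def marketing_extract():
    return jsonify(MARKETING_EXTRACT)

if __name__ == "__main__":
    app.run(port=5000)
```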
 
Data Infrastructure

This relates to the physical architecture of the data landscape (for various flavours of logical architectures, see Data Architecture in Part II). While some of the tasks here may be carried out by (or in collaboration with) IT, the Data Infrastructure team will be expert at the care and feeding of Hadoop and related technologies and have experience in the fine-tuning of Data Warehouses and Data Marts.
 
SWAT Team

While (as both mentioned above and also covered in Part III of this article) some of the heavy lifting in data matters will be carried out by an organisation’s IT team and / or its external partners, the process for getting things done in this way can be slow, tortuous and expensive [6]. It is important that a Data Function has its own capability to make at least minor technological changes, or to build and deploy helpful data facilities, without having to engage with the overall bureaucracy. The SWAT Team would consist of a small number of very able, business-knowledgeable programmers, capable of quickly generating robust and functional code.
 
 
The second part of this piece picks up where I have left off here, first considering Data Architecture.
 

Part I Part II Part III

 
Notes

 
[1]
 
I have added some functions that were absent from the previous version, mostly as they were not central to the points I was making in the previous article.
 
[2]
 
My trilogy on Forming a Data / Information Strategy has the following parts:

  1. Part I – General Strategy
  2. Part II – Situational Analysis
  3. Part III – Completing the Strategy
 
[3]
 
While this theme runs through most of my writing, it is most explicitly referenced in the following three articles:

  1. Marketing Change
  2. Education and Cultural Transformation
  3. Sustaining Cultural Change
 
[4]
 
It should be noted that the relationship management described here is not the same as a Project Manager covering progress against plan. This is more of a two-way conversation to ensure that the Data Function remains cognisant of stakeholder needs.
 
[5]
 
Though of course sometimes one-off analyses have value on an ongoing basis and so need to be productionised. In such cases the Analytics & Insight team would work with the Data Operations & Technology team to achieve this.
 
[6]
 
No citation needed.

 


 

The revised and expanded Data and Analytics Dictionary


Since its launch in August of this year, the peterjamesthomas.com Data and Analytics Dictionary has received a welcome amount of attention, with various people on different social media platforms praising its usefulness, particularly as an introduction to the area. A number of people have made helpful suggestions for new entries or improvements to existing ones. I have also been rounding out the content with more terms relating to each of Data Governance, Big Data and Data Warehousing. As a result, The Dictionary now has over 80 main entries (not including ones that simply refer the reader to another entry, such as Linear Regression, which redirects to Model).

The most recently added entries are as follows:

  1. Anomaly Detection
  2. Behavioural Analytics
  3. Complex Event Processing
  4. Data Discovery
  5. Data Ingestion
  6. Data Integration
  7. Data Migration
  8. Data Modelling
  9. Data Privacy
  10. Data Repository
  11. Data Virtualisation
  12. Deep Learning
  13. Flink
  14. Hive
  15. Information Security
  16. Metadata
  17. Multidimensional Approach
  18. Natural Language Processing (NLP)
  19. On-line Transaction Processing
  20. Operational Data Store (ODS)
  21. Pig
  22. Table
  23. Sentiment Analysis
  24. Text Analytics
  25. View

It is my intention to continue to revise this resource. Adding some more detail about Machine Learning and related areas is probably the next focus.

As ever, ideas for what to include next would be more than welcome (any suggestions used will also be acknowledged).
 


 
