The Anatomy of a Data Function – Part II

Part I Part II Part III

Sepia's Anatomy

This is the second part of my review of the anatomy of a Data Function; the artfully named Part I may be viewed here. As seems to happen all too often to me, this series will now extend to a Part III, which will be published in the coming weeks.

In the first article, I introduced the following Data Function organogram:

The Anatomy of a Data Function


and went on to cover each of Data Strategy, Analytics & Insight and Data Operations & Technology. In Part II, I will consider the two remaining Data Function areas of Data Architecture and Data Management. Covering Related Areas, and presenting some thoughts on how to go about setting up a Data Function and the pitfalls to be faced along the way, will together form the third and final part of this trilogy.

As in Part I, unless otherwise stated, text indented as a quotation is excerpted from the Data and Analytics Dictionary.
 
 
Data Architecture


To be somewhat self-referential, this area acts as a cornerstone for the rest of the Data Function. While non-Data architects can sometimes seem to inhabit a loftier plane than most mere mortals, Data Architects (who definitely belong in the Data Function, rather than in the Business, Enterprise or Solutions Architecture groups) tend to be more practical sorts with actual hands-on technical skills. Perhaps instead of the title “Architect”, “Structural Engineer” would be more appropriate. When a Data Architect draws a diagram with connected boxes, he or she generally understands how the connections work and could probably take a fair stab at implementing the linkages themselves. The other denizens of this area, such as Data Business Analysts, are also essentially pragmatic people, focused on real business outcomes. Data Architecture is a non-theoretical discipline and here I present some of the real-world activities in which its members are often engaged.

Change Portfolio Engagement

One of the most important services that a good Data Function can perform is to act as a moderator for the otherwise deleterious impact that uncontrolled (and uncoordinated) Change portfolios can have on even the best of data landscapes [1]. As I mention in another article:

Over the last decade or so, the delivery of technological change has evolved to the point where many streams of parallel work are run independently of each other with each receiving very close management scrutiny in order to ensure delivery on-time and on-budget. It should be recognised that some of this shift in modus operandi has been as a result of IT departments running projects that have spiralled out of control, or where delivery has been significantly delayed or compromised. The gimlet-like focus of Change on delivery “come Hell or High-water” represents the pendulum swinging to the other extreme.

What this shift in approach means in practice is that – as is often the case – when things go wrong or take longer than anticipated, areas of work are de-scoped to secure delivery dates. In my experience, 9 times out of 10 one of the things that gets thrown out is data-related work; be that not bothering to develop reporting on top of new systems, not integrating new data into existing repositories, not complying with data standards, or not implementing master data management.

As well as the danger of skipping necessary data-related work, if some data-related work is actually undertaken, then corners may be cut to meet deadlines and budgets. It is not atypical, for instance, that a Change Programme, while adding its new capabilities to interfaces or ETL, compromises or overwrites existing functionality. This can mean that data-centric code is in a worse state after a Change Programme than before. My roadworks anecdote begins to feel all too apt a metaphor to employ.

Looking more broadly at Change Programmes, even without the curse of de-scopes, their focus is seldom data and the expertise of Change staff is not often in data matters. Because of this, such work can indeed seem to be analogous to continually digging up the same stretch of road for different purposes, combined with patching things up again in a manner that can sometimes be barely adequate. Extending our metaphor, the result of Change that is not controlled from a data point of view can be a landscape with lumps, bumps and pot-holes. Maybe the sewer was re-laid on time and to budget, but the road has been trashed in the process. Perhaps a new system was shoe-horned into production, but rendered elements of an Analytical Repository useless in the process.

Excerpted from: Bumps in the Road

A primary responsibility of a properly constituted Data Function is to lean hard against the prevailing winds of Change in order to protect existing data capabilities that would otherwise likely be blown away [2]. Given the gargantuan size of most current Change teams, it makes sense to have at least a reasonable amount of Data Function resource applied to this area. Hopefully early interventions in projects and programmes can mitigate any potentially adverse impacts and perhaps even lead to Change being accretive to data landscapes, as it really ought to be.

The best approach, as with most human endeavours, is a collaborative one, with Data Function staff (probably Data Architects) getting involved in new Change projects and programmes at an early stage and shaping them to be positive from a Data dimension. However, the process also needs teeth; on occasion the Data Function must be able to prevent work that would cause real damage from going ahead – hopefully a power that needs to be exercised only rarely.

Data Modelling

It is in this area that the practical bent of Data Architects and Data Business Analysts is seen very clearly. Data modelling mirrors the realities of systems and databases the way that Theoretical Physicists use Mathematics to model the Natural World [3]. In both cases, while there may be a degree of abstraction, the end purpose is to achieve something more concrete. A definition is as follows:

[Data Modelling is] the process of examining data sets (e.g. the database underpinning a system) in order to understand how they are structured, the relationships between their various parts and the business entities and transactions they represent. While system data will have a specific Physical Data Model (the tables it contains and their linkages), Data Modelling may instead look to create a higher-level and more abstract set of pseudo-tables, which would be easier to relate to for non-technical staff and would more closely map to business terms and activities; this is known as a Conceptual Data Model. Sitting somewhere between the two may be found Logical Data Models. There are several specific documents produced by such work, one of the most common being an Entity-Relationship diagram, e.g. a sales order has a customer and one or more line items, each of which has a product.

Data and Analytics Dictionary entry: Data Modelling
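
To make the Entity-Relationship example above a little more tangible, here is a minimal sketch of how those entities and relationships might be expressed in Python; the entity and field names are invented for illustration and do not come from any particular system:

```python
from dataclasses import dataclass, field
from typing import List

# Each class below corresponds to a box on an Entity-Relationship diagram;
# each typed attribute is either a field or a relationship to another entity.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Product:
    product_id: int
    description: str

@dataclass
class LineItem:
    product: Product   # each line item has exactly one product
    quantity: int
    unit_price: float

@dataclass
class SalesOrder:
    order_id: int
    customer: Customer                 # a sales order has a customer...
    line_items: List[LineItem] = field(default_factory=list)  # ...and one or more line items
```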

Data Business Analysis

Another critical role. In my long experience of both setting up Data Functions and running Data Programmes, having good Data Business Analysts on board is often the difference between success and failure. I cannot stress enough how important this role is.

Data Business Analysts are neither regular Business Analysts, nor just Data Analysts, but rather a combination of the best of both. They have all the requirement-gathering skills of the best BAs, but complement these with Data Modelling abilities, always seeking to translate new requirements into expanded or refined Data Models. The way that they approach business requirements is also very specific. The optimal way to do this is by teasing out (and then collating and categorising) business questions and then determining the information needed to answer them. A good Data Business Analyst will also have strong Data Analysis skills, being able to work with unfamiliar and lightly-documented datasets to discern meaning and link this to business concepts. A definition is as follows:

A person who has extensive understanding of both business processes and the data necessary to support these. A Business Analyst is expert at discerning what people need to do. A Data Analyst is adept at working with datasets and extracting meaning from them. A Data Business Analyst can work equally happily in both worlds at the same time. When they talk to people about their requirements for information, they are simultaneously updating mental models of the data necessary to meet these needs. When they are considering how lightly-documented datasets hang together, they constantly have in mind the business purpose to which such resources may be bent.

Data and Analytics Dictionary entry: Data Business Analyst

 
 
Data Management

Data Management

Again, it is worth noting that I have probably defined this area more narrowly than many. It could be argued that it should encompass the work I have under Data Architecture and maybe much of what is under Data Operations & Technology. The actual hierarchy is likely to be driven by factors like the nature of the organisation and the seniority of Managers in the Data Function. For good or ill, I have focussed Data Management more on the care and feeding of Data Assets in my recommended set-up. A definition is as follows:

The day-to-day management of data within an organisation, which encompasses areas such as Data Architecture, Data Quality, Data Governance (normally on behalf of a Data Governance Committee) and often some elements of data provision and / or regular reporting. The objective is to appropriately manage the lifecycle of data throughout the entire organisation, which both ensures the reliability of data and enables it to become a valuable and strategic asset.

In some organisations, Data Management and Analytics are part of the same organisation, in others they are separate but work closely together to achieve shared objectives.

Data and Analytics Dictionary entry: Data Management

Data Governance

There is a clear link here with some of the Data Architecture activities, particularly the Change Portfolio Engagement work-area. Governance should represent the strategic management of the data component of Change (i.e. most of Change); day-to-day collaboration would sit more in the Data Architecture area.

The management processes and policies necessary to ensure that data captured or generated within a company is of an appropriate standard to use, represents actual business facts and has its integrity preserved when transferred to repositories (e.g. Data Lakes and / or Data Warehouses, General Ledgers etc.), especially when this transfer involves aggregation or merging of different data sets. The activities that Data Governance has oversight of include the operation of and changes to Systems of Record and the activities of Data Management and Analytics departments (which may be merged into one unit, or discrete but with close collaboration).

Data Governance has a strategic role, often involving senior management. Day-to-day tasks supporting Data Governance are often carried out by a Data Management team.

Data and Analytics Dictionary entry: Data Governance

Data Definitions & Metadata

This is a relatively straightforward area to conceptualise. Rigorous and consistent definitions of master data and calculated data are indispensable in all aspects of how a Data Function operates and how an organisation both leverages and protects its data. Focusing on Metadata, a definition would be as follows:

[Metadata is] data about data. So descriptions of what appears in fields, how these relate to other fields and what concepts bigger constructs like Tables embody. This helps people unfamiliar with a dataset to understand how it hangs together and is good practice in the same way that documentation of any other type of code is good practice. Metadata can be used to support some elements of Data Discovery by less technical people. It is also invaluable when there is a need for Data Migration.

Data and Analytics Dictionary entry: Metadata
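
By way of illustration only – the table and field names are invented, and in practice such records would live in a metadata repository or data catalogue rather than in code – even a very simple structure can capture useful Metadata:

```python
# An illustrative metadata record for a single table: what each field
# holds, where the data comes from and how fresh it is.
order_table_metadata = {
    "table": "SALES_ORDER",
    "description": "One row per customer order accepted by the business",
    "fields": {
        "ORDER_ID": "Surrogate key, assigned by the order-entry system",
        "CUSTOMER_ID": "Foreign key to CUSTOMER; the ordering counterparty",
        "ORDER_DATE": "Date the order was accepted (not the ship date)",
        "STATUS": "One of OPEN, SHIPPED or CANCELLED",
    },
    "source_system": "Order Entry",
    "refreshed": "Daily at 02:00",
}
```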

Data Audit

One of the challenges in driving Data Quality improvements in organisations is actually highlighting the problems and their impacts. Often poor Data Quality is a hidden cost, spread across many people taking longer to do their jobs than is necessary, or specific instances where interactions with business counterparties (including customers) are compromised. Organisations obviously cope – at least in general – with these issues, but they are a drag on efficiency and, in extremis, can lead to incidents which can cause significant financial loss and/or reputational damage. A way to make such problems more explicit is via a regular Data Audit, a review of data in source systems and as it travels through various data repositories. This would include some assessment of the completeness and overall quality of data, highlighting areas of particular concern. So one component might include the percentage of active records which suffer from a significant data quality issue.
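
As a hedged illustration of the sort of measure just described – the quality rules, column names and figures are all invented – the percentage of active records with a significant issue might be computed along these lines:

```python
import pandas as pd

# Illustrative customer extract; a real audit would read this from a
# source system or repository.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "status":      ["ACTIVE", "ACTIVE", "ACTIVE", "CLOSED"],
    "country":     ["GB", None, "GB", "US"],
    "email":       ["a@example.com", "b@example.com", None, None],
})

active = customers[customers["status"] == "ACTIVE"]

# Each column flags records failing one quality rule; the rules here are
# placeholders for whatever matters in a given organisation.
issues = pd.DataFrame({
    "missing_country": active["country"].isna(),
    "missing_email":   active["email"].isna(),
})

pct = 100 * issues.any(axis=1).mean()
print(f"{pct:.1f}% of active records have a significant data quality issue")
print(issues.sum())  # counts per category, to aid the categorisation below
```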

It is important that any such issues are categorised. Are they the result of less than perfect data entry procedures, which could be tightened up? Are they due to deficient validation in transactional systems, where this could be improved and there may be a role for Master Data Management? Are data interfaces between systems to blame, where these need to be reengineered or potentially replaced? Are there architectural issues with systems or repositories, which will require remedial work to address?

This information needs to be rolled up and presented in an accessible manner so that those responsible for systems and processes can understand where issues lie. Data Audits, even if partially automated, take time and effort, so it may be appropriate to carry them out quarterly. In this case, it is valuable to understand how the situation is changing over time and also to track the – hopefully positive – impact of any remedial action. Experienced Data Analysts with a good appreciation of how business is conducted in the organisation are the type of resource best suited to Data Audit work.

Data Quality

Much that needs to be said here is covered in the previous section about Data Audit. Data Quality can be defined as follows:

The characteristics of data that cover how accurately and completely it mirrors real world events and thereby how much reliance can be placed on it for the purpose of generating information and insight. Enhancing Data Quality should be a primary objective of Data Management teams.

Data and Analytics Dictionary entry: Data Quality

A Data Quality team, which would work closely with Data Audit colleagues, would be focussed on helping to drive improvements. The details of such work are covered in an earlier article, from which the following is excerpted:

There are a number of elements that combine to improve the quality of data:

  1. Improve how the data is entered
  2. Make sure your interfaces aren’t the problem
  3. Check how the data is entered / interfaced
  4. Don’t suppress bad data in your BI

As with any strategy, it is ideal to have the support of all four pillars. However, I have seen greater and quicker improvements through the fourth element than with any of the others.

Excerpted from: Using BI to drive improvements in data quality
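
To make the fourth element above a little more concrete, here is a minimal sketch (data and rules invented) of a BI transformation that surfaces bad data rather than suppressing it; missing and suspect values are mapped to explicit buckets and flags, so the problem remains visible on every report:

```python
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", None, "East"],
    "amount": [100.0, 250.0, 75.0, -999.0],   # -999 is a legacy sentinel value
})

# Don't suppress bad data: give it an explicit "Unknown" bucket and flag
# suspect amounts, rather than quietly filtering such rows out.
report = sales.assign(
    region=sales["region"].fillna("Unknown"),
    suspect_amount=sales["amount"] < 0,
)

print(report.groupby("region")["amount"].sum())
print(f"{report['suspect_amount'].sum()} record(s) flagged for review")
```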

Master Data Management

There is some overlap here with Data Definitions & Metadata as mentioned above. Master Data Management has also been mentioned here in the context of Data Quality initiatives. However this specialist area tends to demand dedicated staff. A definition is as follows:

Master Data Management is the term used both to describe the set of processes by which Master Data is created, changed and deleted in an organisation, and also the technological tools that can facilitate these processes. There is a strong relationship here to Data Governance, an area which also encompasses broader objectives. The aim of MDM is to ensure that the creation of business transactions results in valid data, which can then be leveraged confidently to create Information.

Many of the difficulties in MDM arise from items of Master Data that can change over time; for example when one counterparty is acquired by another, or an organisational structure is changed (maybe creating new departments and consolidating old ones). The challenges here include how to report historical transactions that are tagged with Master Data that has since changed.

Data and Analytics Dictionary entry: Master Data Management
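
One common response to this challenge is effective-dated Master Data, where each version of a record carries the period for which it applied. A minimal sketch (all names and dates invented) follows:

```python
import datetime as dt

# Each row of history records the period for which a Master Data value
# was the truth (a Type 2-style approach).
counterparty_history = [
    # (counterparty_id, name, valid_from, valid_to)
    (42, "Acme Ltd",           dt.date(2010, 1, 1), dt.date(2016, 6, 30)),
    (42, "MegaCorp (ex-Acme)", dt.date(2016, 7, 1), dt.date(9999, 12, 31)),
]

def name_as_at(counterparty_id: int, as_at: dt.date) -> str:
    """Return the name that applied on a given date, so historical
    transactions can be reported with period-correct Master Data."""
    for cid, name, valid_from, valid_to in counterparty_history:
        if cid == counterparty_id and valid_from <= as_at <= valid_to:
            return name
    raise KeyError(f"No record for {counterparty_id} as at {as_at}")

print(name_as_at(42, dt.date(2015, 3, 1)))  # Acme Ltd
print(name_as_at(42, dt.date(2018, 3, 1)))  # MegaCorp (ex-Acme)
```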

 
 
At this point, we have covered all of the work-areas within our idealised Data Function. In the third and final piece (which is yet to be published), we will consider the right-hand column of Related Areas, ones that a Data Function must collaborate with. Having covered these, the trilogy will close by offering some thoughts on the challenges of setting up a Data Function and how these may be overcome.
 

Part I Part II Part III

 
Notes

 
[1]
 
I am old enough to recall a time before Change portfolios, but I can recall no organisation in which I have worked over the last 20 years where Change portfolios have had a positive impact on data assets; maybe I have just been unlucky, but it begins to feel more like a fundamental Physical Law.
 
[2]
 
I have clearly been writing about hurricanes too much recently!
 
[3]
 
As is seen, for example, in the Introduction to my [as yet unfinished] book on the role of Group Theory in Theoretical Physics, Glimpses of Symmetry.

 


 

The Anatomy of a Data Function – Part I

Part I Part II Part III

Back in Alphabet Soup, I presented a diagram covering what I think are good and bad approaches to organising Analytics and Data Management. I wanted to offer an expanded view [1] of the good organisation chart and to talk a bit about each of its components. Originally, I planned to address these objectives across two articles. As happens to me all too frequently, the piece has now expanded to become three parts. The second may be read here. The third is forthcoming.

Let’s leap right in and look at my suggested chart:

The Anatomy of a Data Function


I appreciate that the above is a lot of boxes! I can feel Finance and HR staff reaching for their FTE calculators as I write. A few things to note:

  1. I have avoided the temptation to add the titles of executives, managers or team leaders. Alphabet Soup itself pointed out how tough it can be to wrestle with the nomenclature. Instead I have just focussed on areas of work.
     
  2. The term “work areas” is intentional. In larger organisations, there may be teams or individuals corresponding to each box. In smaller ones Data Function staff will wear many hats and several work areas may be covered by one person.
     
  3. In some places, a number of work areas that I have tagged as Data Function ones may be performed in other parts of the organisation, though hopefully with collaboration and coordination.

Having dealt with these caveats, let’s provide some colour on each of these areas, progressing from top to bottom and left to right. In this first article we will consider the Data Strategy, Analytics & Insight and Data Operations & Technology areas. The second part will cover the remaining elements of Data Architecture and Data Management. The final article, when published, will consider Related Areas before also covering some of the challenges that may be faced in setting up a Data Function.

In what follows, unless otherwise stated, text indented as a quotation is excerpted from the Data and Analytics Dictionary.
 
 
Data Strategy


A clear strategy is obviously most important to establish in the early days of a Data Function. Indeed a Data Strategy may well call for the creation of a Data Function where none currently exists. For anyone interested in this process, I recommend my series of three articles on this subject [2]. However, a Data Strategy is not something carved in stone; it will need to be revisited and adapted (maybe significantly) as circumstances change (e.g. after an acquisition, a change in market conditions or potentially due to the emergence of some new technology). There is thus a need for ongoing work in this area. However, as demand for strategic work will tend to be lumpy, I suggest amalgamating Data Strategy with the following two sub-areas.

Data Comms & Education

Elsewhere on this site, I have highlighted the need for effective communication, education and assiduous follow-up in data programmes [3]. Education on data matters does not stop when a data quality drive is successfully completed, or when a new set of analytical capabilities is introduced; there is a need for an ongoing commitment here. Activities falling into this work area include: publishing regular data newsletters and infographics, designing and helping to deliver training programmes, and providing follow-up and support to aid the embedded use of new capabilities or to ingrain new behaviours.

Relationship Management

There is a need for all Data Function staff to establish and maintain good working relations with any colleagues they come into contact with, regardless of their level or influence. However, the nature of generally hierarchical organisations is that it is often prudent to pay special attention to more senior staff, or to the type of person (common in many companies) who may not be that senior, but whose opinion is influential. In aggregate these two groups of people are often described as stakeholders. Providing regular updates to stakeholders and ensuring both that they are comfortable with Data Function work and that this is aligned with their priorities can be invaluable [4]. Having senior, business-savvy Data Function people available to do this work is the most likely path to success.
 
 
Analytics & Insight


Broadly speaking, the Analytics area and its sub-areas are focussed more on one-off analyses than on the recurrent production of information [5], the latter being more the preserve of the Data Operations & Technology area. There is also more of a statistical flavour to the work carried out here.

[Analytics relates to] deriving insights from data which are generally beyond the purpose for which the data was originally captured – to be contrasted with Information which relates to the meaning inherent in data (i.e. the reason that it was captured in the first place). Analytics often employ advanced statistical techniques (logistic regression, multivariate regression, time series analysis etc.) to derive meaning from data.

Data and Analytics Dictionary entry: Analytics
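
As a small, hedged illustration of the point about going beyond the original purpose of capture – the dataset and the churn scenario are entirely invented – data recorded to administer customers can be re-used to model their behaviour:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: tenure in months, support calls in the last quarter. These were
# captured to run the business, not to predict churn.
X = np.array([[3, 5], [24, 1], [2, 7], [36, 0], [6, 4], [48, 1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = customer subsequently churned

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[4, 6]])[0, 1])  # estimated churn probability
```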

Data Science

I have Data Science as a sub-area of Analytics. As with most terminology used in the data arena and most organisational units that exist in Data Functions, some people might argue that I have this the wrong way round and that Data Science should be preeminent. Reconciling different points of view is not my objective here; I think most people will agree that both work areas should be covered. This comment pertains to many other parts of this article. Here is a definition of the area (or rather of the people who populate it):

[Data Scientists are people who are] au fait with exploiting data in many formats from Flat Files to Data Warehouses to Data Lakes. Such individuals possess equal abilities in the data technologies (such as Big Data) and how to derive benefit from these via statistical modelling. Data Scientists are often lapsed actual scientists.

Data and Analytics Dictionary entry: Data Scientist

Data Visualisation

There is an overlap here with both the Data Science team within the Analytics & Insight area and the Business Intelligence team in the Data Operations & Technology area. Many of the outputs of a good Data Function will include graphs, charts and other such exhibits. However, here would be located the real specialists, the people who would set standards for the presentation of visual data across the Data Function and be the most able in leveraging visualisation tools. A definition of Data Visualisation is as follows:

Techniques – such as graphs – for presenting complex information in a manner in which it can be more easily digested by human observers. Based on the concept that a picture paints a thousand words (or a dozen Excel sheets).

Data and Analytics Dictionary entry: Data Visualisation
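
A trivial sketch of the “picture paints a thousand words” idea, with invented figures:

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales = [120, 135, 128, 150, 162, 171]  # invented monthly figures

fig, ax = plt.subplots()
ax.plot(months, sales, marker="o")  # one line replaces a dozen Excel sheets
ax.set_title("Monthly Sales (illustrative)")
ax.set_ylabel("Units sold")
plt.show()
```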

Predictive Analytics

Gartner refer to four types of Analytics: descriptive, diagnostic, predictive and prescriptive. In an article I referred to these as:

  • What happened?
  • Why did it happen?
  • What is going to happen next?
  • What should we be doing?

Data and Analytics Dictionary entry: Analytics

Predictive Analytics is the element of the Analytics function that aims to predict the future – “What is going to happen next?” in the above list. This can be as simple as extrapolating data based on a trend line, or can involve more sophisticated techniques such as Time Series Analysis. As with most elements of the Data Function, there is overlap between Predictive Analytics and both Data Science and Business Intelligence.
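
The simple end of this spectrum – extrapolating along a trend line – can be sketched in a few lines; the history below is synthetic:

```python
import numpy as np

periods = np.arange(1, 13)  # twelve months of (synthetic) history
actuals = 100 + 5 * periods + np.random.default_rng(0).normal(0, 3, 12)

# Fit a straight line to the history and project it forward three months.
slope, intercept = np.polyfit(periods, actuals, deg=1)
forecast = slope * np.arange(13, 16) + intercept
print(forecast)
```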

“Skunkworks”

As with Data Strategy, the state of the art in Analytics & Insight will continue to evolve. This part of the Data Function will aim to keep current with the latest developments and to try out new techniques and new technologies that may later be adopted more widely by Data Function colleagues. The “skunkworks” team would be staffed by capable programmers / data scientists / statisticians.
 
 
Data Operations & Technology


It could be reasonably argued that this area is part of Data Management; I probably would not object too strongly to this suggestion. However, there are some benefits to considering it separately. This is the most IT-like of the areas considered here. It recognises that data technology (be it the Hadoop suite, Data Warehouse technology, or combinations of both) is different to many other forms of technology and needs its own specialists to focus on it. It is likely that the staff in this area will also collaborate closely with IT (see the Related Areas in Part III) or, in some cases, supervise work carried out by IT. As well as directly creating data capabilities, Data Operations & Technology staff would be active in the day-to-day running of these; again in collaboration with colleagues from both inside and outside of the Data Function.

Business Intelligence

There is no ISO definition, but I use this term as a catch-all to describe the transformation of raw data into information that can be disseminated to business people to support decision-making.

Data and Analytics Dictionary entry: Business Intelligence

This sub-area focusses on the relatively mature task of providing Business Intelligence solutions to organisations and working with IT to support and maintain these. Good BI tools work best on a sound underlying information architecture, so there would also need to be close collaboration with Data Infrastructure staff within Data Operations & Technology, as well as with colleagues from Data Architecture and Analytics & Insight.

Regular Reporting

If BI provides interactive capabilities to support decision-making, Regular Reporting is about the provision of specific key reports to relevant parties on a periodic basis: daily, weekly, monthly etc. These may be burst out to people’s e-mail accounts, provided at some central location, or both. While this is an area that is ideally automated, there will still be a significant need for human monitoring and for supporting the inevitable changes.

Data Service

One of the things that any part of a Data Function will find itself doing on a very regular basis is crafting ad hoc data extracts for other departments, e.g. Marketing, Risk & Compliance etc. Sometimes such a need will be ongoing and a web-service or some other Data Integration mechanism will need to be set up. Rather than having this be something that is supported out of the general running costs of the Data Function, it makes sense to have a specific unit whose role is to fulfil these needs. Even so, there may be a need for queuing and prioritisation of requests.
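
Where the need is ongoing, the web-service mentioned above might, as a minimal sketch, look something like the following; the framework (Flask), endpoint and payload are illustrative assumptions rather than a recommendation:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_marketing_extract():
    # Illustrative stand-in for a query against a governed repository.
    return [{"customer_id": 1, "segment": "Retail", "country": "GB"}]

@app.route("/extracts/marketing")
def marketing_extract():
    # A real Data Service would also enforce entitlements, log requests
    # (useful for the queuing and prioritisation mentioned above) and
    # page through large result sets.
    return jsonify(fetch_marketing_extract())

if __name__ == "__main__":
    app.run(port=8080)
```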

Data Infrastructure

This relates to the physical architecture of the data landscape (for various flavours of logical architectures, see Data Architecture in Part II). While some of the tasks here may be carried out by (or in collaboration with) IT, the Data Infrastructure team will be expert at the care and feeding of Hadoop and related technologies and have experience in the fine-tuning of Data Warehouses and Data Marts.

SWAT Team

While (as both mentioned above and also covered in Part III of this article) some of the heavy lifting in data matters will be carried out by an organisation’s IT team and / or its external partners, the process for getting things done in this way can be slow, tortuous and expensive [6]. It is important that a Data Function has its own capability to make at least minor technological changes, or to build and deploy helpful data facilities, without having to engage with the overall bureaucracy. The SWAT Team will have a small number of very capable and business-knowledgeable programmers, able to quickly generate robust and functional code.
 
 
The second part of this piece will pick up where I have left off here and first consider Data Architecture.
 

Part I Part II Part III

 
Notes

 
[1]
 
I have added some functions that were absent from the earlier diagram, mostly as they were not central to the points I was making in that article.
 
[2]
 
My trilogy on Forming a Data / Information Strategy has the following parts:

  1. Part I – General Strategy
  2. Part II – Situational Analysis
  3. Part III – Completing the Strategy
 
[3]
 
While this theme runs through most of my writing, it is most explicitly referenced in the following three articles:

  1. Marketing Change
  2. Education and Cultural Transformation
  3. Sustaining Cultural Change
 
[4]
 
It should be noted that the relationship management described here is not the same as a Project Manager covering progress against plan. This is more of a two-way conversation to ensure that the Data Function remains cognisant of stakeholder needs.
 
[5]
 
Though of course sometimes one-off analyses have value on an ongoing basis and so need to be productionised. In such cases the Analytics & Insight team would work with the Data Operations & Technology team to achieve this.
 
[6]
 
No citation needed.

 


 

The revised and expanded Data and Analytics Dictionary

The Data and Analytics Dictionary

Since its launch in August of this year, the peterjamesthomas.com Data and Analytics Dictionary has received a welcome amount of attention with various people on different social media platforms praising its usefulness, particularly as an introduction to the area. A number of people have made helpful suggestions for new entries or improvements to existing ones. I have also been rounding out the content with some more terms relating to each of Data Governance, Big Data and Data Warehousing. As a result, The Dictionary now has over 80 main entries (not including ones that simply refer the reader to another entry, such as Linear Regression, which redirects to Model).

The most recently added entries are as follows:

  1. Anomaly Detection
  2. Behavioural Analytics
  3. Complex Event Processing
  4. Data Discovery
  5. Data Ingestion
  6. Data Integration
  7. Data Migration
  8. Data Modelling
  9. Data Privacy
  10. Data Repository
  11. Data Virtualisation
  12. Deep Learning
  13. Flink
  14. Hive
  15. Information Security
  16. Metadata
  17. Multidimensional Approach
  18. Natural Language Processing (NLP)
  19. On-line Transaction Processing
  20. Operational Data Store (ODS)
  21. Pig
  22. Table
  23. Sentiment Analysis
  24. Text Analytics
  25. View

It is my intention to continue to revise this resource. Adding some more detail about Machine Learning and related areas is probably the next focus.

As ever, ideas for what to include next would be more than welcome (any suggestions used will also be acknowledged).
 


 


 

A truth universally acknowledged…

£10 note

  “It is a truth universally acknowledged, that an organisation in possession of some data, must be in want of a Chief Data Officer”

— Growth and Governance, by Jane Austen (1813) [1]

 

I wrote about a theoretical job description for a Chief Data Officer back in November 2015 [2]. While I have been on “paternity leave” following the birth of our second daughter, a couple of genuine CDO job specs landed in my inbox. While unable to respond for the aforementioned reasons, I did leaf through the documents. Something immediately struck me; they were essentially wish-lists covering a number of data-related fields, rather than a description of what a CDO might actually do. Clearly I’m not going to cite the actual text here, but the following is representative of what appeared in both requirement lists:

CDO wishlist

Mandatory Requirements:

Highly Desirable Requirements:

  • PhD in Mathematics or a numerical science (with a strong record of highly-cited publications)
  • MBA from a top-tier Business School
  • TOGAF certification
  • PRINCE2 and Agile Practitioner
  • Invulnerability and X-ray vision [3]
  • Mastery of the lesser incantations and a cloak of invisibility [3]
  • High midi-chlorian reading [3]
  • Full, clean driving licence

Your common or garden CDO

The above list may have descended into farce towards the end, but I would argue that the problems started to occur much earlier. The above is not a description of what is required to be a successful CDO; it’s a description of a Swiss Army Knife. There is also the minor practical point that, out of a World population of around 7.5 billion, there may well be no one who ticks all the boxes [4].

Let’s make the fallacy of this type of job description clearer by considering what a similar approach would look like if applied to what is generally the most senior role in an organisation, the CEO. Whoever drafted the above list of requirements would probably characterise a CEO as follows:

  • The best salesperson in the organisation
  • The best accountant in the organisation
  • The best M&A person in the organisation
  • The best customer service operative in the organisation
  • The best facilities manager in the organisation
  • The best janitor in the organisation
  • The best purchasing clerk in the organisation
  • The best lawyer in the organisation
  • The best programmer in the organisation
  • The best marketer in the organisation
  • The best product developer in the organisation
  • The best HR person in the organisation, etc., etc., …

Of course a CEO needs to be none of the above; they need to be a superlative leader who is expert at running an organisation (even then, they may focus on plotting the way forward and leave the day-to-day running to others). For the avoidance of doubt, I am not saying that a CEO requires no domain knowledge and has no expertise – they would need both – however they don’t have to know every aspect of company operations better than the people who do it.

The same argument applies to CDOs. Domain knowledge probably should span most of what is in the job description (save for maybe the three items with footnotes), but knowledge is different to expertise. As CDOs don’t grow on trees, they will most likely be experts in one or a few of the areas cited, but not all of them. Successful CDOs will know enough to be able to talk to people in the areas where they are not experts. They will have to be competent at hiring experts in every area of a CDO’s purview. But they do not have to be able to do the job of every data-centric staff member better than the person could do themselves. Even if you could identify such a CDO, they would probably lose their best staff very quickly due to micromanagement.

Conducting the data orchestra

A CDO has to be a conductor of both the data function orchestra and of the use of data in the wider organisation. This is a talent in itself. An internationally renowned conductor may have previously been a violinist, but it is unlikely they were also a flautist and a percussionist. They do however need to be able to tell whether or not the second trumpeter is any good; this is not the same as being able to play the trumpet yourself of course. The conductor’s key skill is in managing the efforts of a large group of people to create a cohesive – and harmonious – whole.

The CDO is of course still a relatively new role in mainstream organisations [5]. Perhaps these job descriptions will become more realistic as the role becomes more familiar. It is to be hoped so, else many a search for a new CDO will end in disappointment.

Having twisted her text to my own purposes at the beginning of this article, I will leave the last words to Jane Austen:

  “A scheme of which every part promises delight, can never be successful; and general disappointment is only warded off by the defence of some little peculiar vexation.”

— Pride and Prejudice, by Jane Austen (1813)

 

 
Notes

 
[1]
 
Well if a production company can get away with Pride and Prejudice and Zombies, then I feel I am on reasonably solid ground here with this title.

I also seem to be riffing on JA rather a lot at present; I used Rationality and Reality as the title of one of the chapters in my [as yet unfinished] Mathematical book, Glimpses of Symmetry.

 
[2]
 
Wanted – Chief Data Officer.
 
[3]
 
Most readers will immediately spot the obvious mistake here. Of course all three of these requirements should be mandatory.
 
[4]
 
To take just one example, gaining a PhD in a numerical science, a track record of highly-cited papers and also obtaining an MBA would take most people at least a few weeks of effort. Is it likely that such a person would next focus on a PRINCE2 or TOGAF qualification?
 
[5]
 
I discuss some elements of the emerging consensus on what a CDO should do in: 5 Themes from a Chief Data Officer Forum and 5 More Themes from a Chief Data Officer Forum.

 


 

The peterjamesthomas.com Data and Analytics Dictionary

The Data and Analytics Dictionary

I find myself frequently being asked questions around terminology in Data and Analytics and so thought that I would try to define some of the more commonly used phrases and words. My first attempt to do this can be viewed in a new page added to this site (this also appears in the site menu):

The Data and Analytics Dictionary

I plan to keep this up-to-date as the field continues to evolve.

I hope that my efforts to explain some concepts in my main area of specialism are both of interest and utility to readers. Any suggestions for new entries or comments on existing ones are more than welcome.
 

 

Do any technologies grow up or do they only come of age?

The 2016 Big Data Maturity Survey (by AtScale)

I must of course start by offering my apologies to that doyen of data experts, Stephen King, for mangling his words to suit the purposes of this article [1].

The AtScale Big Data Maturity Survey for 2016 came to my attention through a connection (see Disclosure below). The survey covers “responses from more than 2,550 Big Data professionals, across more than 1,400 companies and 77 countries” and builds on their 2015 survey.

I won’t use the word clickbait [2], but most of the time documents like this lead you straight to a form where you can add your contact details to the organisation’s marketing database. Indeed you, somewhat inevitably, have to pay the piper to read the full survey. However AtScale are to be commended for at least presenting some of the high-level findings before asking you for the full entry price.

These headlines appear in an article on their blog. I won’t cut and paste the entire text, but a few points that stood out for me included:

  1. Close to 70% [of respondents] have been using Big Data for more than a year (vs. 59% last year)
     
  2. More than 53% of respondents are using Cloud for their Big Data deployment today and 14% of respondents have all their Big Data in the Cloud
     
  3. Business Intelligence is [the] #1 workload for Big Data with 75% of respondents planning on using BI on Big Data
     
  4. Accessibility, Security and Governance have become the fastest growing areas of concern year-over-year, with Governance growing most at 21%
     
  5. Organizations who have deployed Spark [3] in production are 85% more likely to achieve value

Bullet 3 is perhaps notable as Big Data is often positioned – perhaps erroneously – as supporting analytics as opposed to “traditional BI” [4]. On the contrary, it appears that a lot of people are employing it in very “traditional” ways. On reflection this is hardly surprising as many organisations have as yet failed to get the best out of the last wave of information-related technology [5], let alone the current one.

However, perhaps the two most significant trends are the shift from on-premises Big Data to Cloud Big Data and the increased importance attached to Data Governance. The latter was perhaps more of a neglected area in the earlier and more free-wheeling era of Big Data. The rise in concerns about Big Data Governance is probably the single greatest pointer towards the increasing maturity of the area.

It will be interesting to see what the AtScale survey of 2017 has to say in 12 months.
 


 
Disclosure:

The contact in question is Bruno Aziza (@brunoaziza), AtScale’s Chief Marketing Officer. While I have no other connections with AtScale, Bruno and I did make the following video back in 2011 when both of us were at other companies.


 
Notes

 
[1]
 
Excerpted from The Gunslinger.
 
[2]
 
Oops!
 
[3]
 
Apache Hadoop – which has become almost synonymous with Big Data – has two elements, the Hadoop Distributed File System (HDFS, the piece which deals with storage) and MapReduce (which does the processing of data). Apache Spark was developed to improve upon the speed of the MapReduce approach where the same data is accessed many times, as can happen in some queries and algorithms. This is achieved in part by holding some or all of the data to be accessed in memory. Spark works with HDFS and also with other distributed data stores, such as Apache Cassandra.
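
A hedged sketch of the in-memory idea in PySpark (the file path and column names are illustrative): once a DataFrame is cached, repeated queries reuse the in-memory copy instead of re-reading from distributed storage.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-demo").getOrCreate()

df = spark.read.parquet("hdfs:///data/transactions")  # illustrative path
df.cache()  # ask Spark to keep this data in memory once computed

df.filter(df.amount > 1000).count()        # first action populates the cache
df.groupBy("region").sum("amount").show()  # subsequent queries reuse it
```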
 
[4]
 
How phrases from the past come around again!
 
[5]
 
Some elements of the technology have changed, but the vast majority of the issues I covered in “Why Business Intelligence projects fail” hold as true today as they did back in 2009 when I wrote this piece.

 

 

Bumps in the Road

Bumps in the road

The above image appears in my updated [1] seminar deck Data Management, Analytics and People: An Eternal Golden Braid. It is featured on a slide titled “Why Data Management? – The negative case” [2]. So what was the point that I was so keen to make?

Well the whole slide looks like this…

Why Data Management?

…and the image on the left relates most directly to the last item of bulleted text on the right-hand side [3].
 
 
An Introductory Anecdote

Roadworks

Before getting into the meat of this article, an aside which may illuminate where I am coming from. I currently live in London, a city where I was born and to which I returned after a sojourn in Cambridge while my wife completed her PhD. Towards the end of my first period in London, we lived on a broad but one-way road in West London. One day we received notification that the road was going to be resurfaced and that moving our cars might be a useful thing to consider. The work was duly carried out and our road now had a deep black covering of fresh asphalt [4], criss-crossed by gleaming and well-defined dashed white lines demarcating parking bays. Within what seemed like days, but was certainly no more than a few weeks, roadworks signs reappeared on our road, together with red and white fencing, a digger and a number of people with pneumatic drills [5] and shovels. If my memory serves me well, it was the local water company (Thames Water) who visited our road first.

The efforts of the Thames Water staff, while no doubt necessary and carried out professionally, rather spoiled our pristine road cover. I guess these things happen and coordination between local government, private firms and the sub-contractors that both employ cannot be easy [6]. However what was notable was that things did not stop with Thames Water. Over the next few months the same stretch of road was also dug up by both the Electricity and Gas utilities. There was a further set of roadworks on top of these, but my memory fails me on which organisation carried these out and for what purpose [7]; we are talking about events that occurred over eight years ago here.

More roadworks

The result of all this uncoordinated work was a previously pristine road surface now pock-marked by a series of new patches of asphalt, or maybe other materials; they certainly looked different and (as in the above photo) had different colours and grains. Several of these patches of new road covering overlapped each other; that is, one hole re-dug sections previously excavated by earlier holes. Also the new patches of road surface were often either raised or depressed from the main run of asphalt, leading to very uneven terrain. I have no idea how much it cost to repave the road in the first instance, but a few months of roadworks pretty much buried the repaving and led to a road whose surface was the opposite of smooth and consistent. I’d go so far as to say that the road was now in considerably worse condition than before the initial repaving. In any case, it could be argued that the money spent on the repaving was, for all intents and purposes, wasted.

After all this activity, our road was somewhat similar to the picture at the top of this article, but its state was much worse with more extensive patching and more overlapping layers. To this day I rather wish I had taken a photograph, which would also have saved me some money on stock photos!

I understand that each of the roadworks was in support of something that was probably desirable. For example, better sewerage, or maintenance to gas supplies which might otherwise have become dangerous. My assumption is that all of the work that followed on from the repaving needed to be done and that each was done at least as well as it had to be. Probably most of these works were completed on time and on budget. However, from the point of view of the road as a whole, the result of all these unconnected and uncoordinated works was a substantial deterioration in both its appearance and utility.
Lots of good can equal bad (for certain values of 'good')

In summary, the combination of a series of roadworks, each of which either needed to be done or led to an improvement in some area, resulted in the environment in which they were carried out becoming degraded and less fit-for-purpose. A series of things which could be viewed as beneficial in isolation were instead deleterious in aggregate. At this point, the issue that I wanted to highlight in the data world is probably swimming into focus for many readers.
 
 
The Entropy of a Data Asset exposed to Change tends to a Maximum [8]

Entropy

Returning to the slide reproduced above, my assertion – which has been borne out during many years of observing the area – is that Change Programmes and Projects, if not subject to appropriately rigorous Data Governance, inevitably lead to the degradation of data assets over time.

Here both my roadworks anecdote and the initial photograph illustrate the point that I am looking to make. Over the last decade or so, the delivery of technological change has evolved [9] to the point where many streams of parallel work are run independently of each other with each receiving very close management scrutiny in order to ensure delivery on-time and on-budget [10]. It should be recognised that some of this shift in modus operandi has been as a result of IT departments running projects that have spiralled out of control, or where delivery has been significantly delayed or compromised. The gimlet-like focus of Change on delivery “come Hell or High-water” represents the pendulum swinging to the other extreme.

Pendulum

What this shift in approach means in practice is that – as is often the case – when things go wrong or take longer than anticipated [11], areas of work are de-scoped to secure delivery dates. In my experience, 9 times out of 10 one of the things that gets thrown out is data-related work; be that not bothering to develop reporting on top of new systems, not integrating new data into existing repositories, not complying with data standards, or not implementing master data management.

As well as the danger of skipping necessary data-related work, if some data-related work is actually undertaken, then corners may be cut to meet deadlines and budgets. It is not atypical, for instance, that a Change Programme, while adding its new capabilities to interfaces or ETL, compromises or overwrites existing functionality. This can mean that data-centric code is in a worse state after a Change Programme than before. My roadworks anecdote begins to feel all too apt a metaphor to employ.

Looking more broadly at Change Programmes, even without the curse of de-scopes, their focus is seldom data and the expertise of Change staff is not often in data matters. Because of this, such work can indeed seem to be analogous to continually digging up the same stretch of road for different purposes, combined with patching things up again in a manner that can sometimes be barely adequate. Extending our metaphor [12], the result of Change that is not controlled from a data point of view can be a landscape with lumps, bumps and pot-holes. Maybe the sewer was re-laid on time and to budget, but the road has been trashed in the process. Perhaps a new system was shoe-horned into production, but rendered elements of an Analytical Repository useless in the process.

Data Governance (well actually Bank Governance, Data Governance involves less impressive facades)

Avoiding these calamities is the central role of Data Governance. What these examples also stress is that, rather than the dry, policy-based area that Data Governance is often assumed to be, it must be more dynamic and much more engaged in Change Portfolios. Such engagement should ideally be early and in a helpful manner, not late and in a policing role.

The analogy I have employed here also explains why leveraging existing Governance arrangements to add in a Data Governance dimension seldom works. This would be like asking the contractors engaged in roadworks to be extra careful to liaise with each other. This won’t work as there is no real incentive for such collaboration; the motivation of getting their own piece of work done quickly and cheaply will trump other considerations. Instead some independent oversight is required. Like any good “regulator”, this will work best if Data Governance professionals seek to be part of the process and focus on improving it. The alternative of simply pointing out problems after the fact adds much less business value.
 
 
And Finally

Sherlock

In A Study in Scarlet, John Watson reads an article which turns out to have been written by his illustrious co-lodger. A passage is as follows:

“From a drop of water,” said the writer, “a logician could infer the possibility of an Atlantic or a Niagara without having seen or heard of one or the other. So all life is a great chain, the nature of which is known whenever we are shown a single link of it.”

While I don’t claim to have the same acuity of mind as Conan Doyle’s most famous creation, I can confirm that you can learn a lot about the need for Data Governance simply by closely observing the damage done by roadworks.
 


 Notes

 
[1]
 
I have updated my latest deck to use a different photo due to a dispute with the company I purchased the original photo from.
 
[2]
 
Which you may be glad to hear is followed directly by one titled “Why Data Management? – The positive case”.
 
[3]
 
It may be noted that I am going through a minimalist phase in my decks for public speaking. Indeed I did toy with having a deck consisting primarily of images before chickening out. Of course one benefit of being text-light is that you can focus on different elements and tell different stories for different audiences (see Presenting in Public).
 
[4]
 
Blacktop.
 
[5]
 
Jackhammers.
 
[6]
 
Indeed sometime in the late 1980s or early 1990s I was approached by one of the big consultancies about a job on a project to catalogue all proposed roadworks across London in an Oracle database. The objective of this was to better coordinate roadworks. I demurred and I believe that the project was unsuccessful, certainly by the evidence of what happened to our road.
 
[7]
 
It could well have been Thames Water again – the first time sewers, the second household water supply. It might have been British Telecom, but it probably wasn’t a cable company as they had been banned from excavations in Westminster after failing to make good after previous installations.
 
[8]
 
Rudolf Clausius in 1865, with reference to the Second Law of Thermodynamics.
 
[9]
 
As with the last time I used this word (see the notes section of Alphabet Soup), and as also applies to the phenomenon in the natural world, evolution implies change, but not necessarily improvement.
 
[10]
 
Or perhaps more realistically to ensure that delays are minimised and cost overruns managed downwards.
 
[11]
 
Frequently, it must be added, this is because of either insufficient, or the wrong type of, up-front analysis, or because a delivery timeframe was agreed based on some external factor rather than on what could practically be delivered in the time available. Oftentimes both factors are present and compound each other. The overall timetable is not based on any concrete understanding of what is to be done, and analysis is either curtailed to meet timeframes, or – more insidiously – its findings are massaged to fit the desired milestones.
 
[12]
 
Hopefully not over-extending it.