# Put our Knowledge and Writing Skills to Work for you

As well as consultancy, research and interim work, peterjamesthomas.com Ltd. helps organisations in a number of other ways. The recently launched Data Strategy Review Service is just one example.

Another service we provide is writing White Papers for clients. Sometimes the labels of these are white [1] as well as the paper. Sometimes Peter James Thomas is featured as the author. White Papers can be based on themes arising from articles published here, they can feature findings from de novo research commissioned in the data arena, or they can be on a topic specifically requested by the client.

Seattle-based Data Consultancy, Neal Analytics, is an organisation we have worked with on a number of projects and whose experience and expertise dovetails well with our own. They recently commissioned a White Paper expanding on our 2018 article, Building Momentum – How to begin becoming a Data-driven Organisation. The resulting paper, The Path to Data-Driven, has just been published on Neal Analytics’ site (they have a lot of other interesting content, which I would recommend checking out):

If you find the articles published on this site interesting and relevant to your work, then perhaps – like Neal Analytics – you would consider commissioning us to write a White Paper or some other document. If so, please just get in contact, or simply schedule an introductory ‘phone call. We have a degree of flexibility on the commercial side and will most likely be able to come up with an approach that fits within your budget. Although we are based in the UK, commissions – like the one from Neal Analytics – from organisations based in other countries are welcome.

Notes

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# A Picture Paints a Thousand Numbers

Introduction

The recent update of The Data & Analytics Dictionary featured an entry on Charts. Entries in The Dictionary are intended to be relatively brief [1] and also the layout does not allow for many illustrations. Given this, I have used The Dictionary entries as a basis for this slightly expanded article on the subject of chart types.

A Chart is a way to organise and Visualise Data with the general objective of making it easier to understand and – in particular – to discern trends and relationships. This article will cover some of the most frequently used Chart types, which appear in alphabetical order.

 Note:   Here an “axis” is a fixed reference line (sometimes invisible for stylistic reasons) which typically goes vertically up the page or horizontally from left to right across the page (but see also Radar Charts). Categories and values (see below) are plotted on axes. Most charts have two axes. Throughout I use the word “category” to refer to something discrete that is plotted on an axis, for example France, Germany, Italy and The UK, or 2016, 2017, 2018 and 2019. I use the word “value” to refer to something more continuous plotted on an axis, such as sales or number of items etc. With a few exceptions, the Charts described below plot values against categories. Both Bubble Charts and Scatter Charts plot values against other values. I use “series” to mean sets of categories and values. So if the categories are France, Germany, Italy and The UK; and the values are sales; then different series may pertain to sales of different products by country.

Index

Bar & Column Charts
Clustered Bar Charts, Stacked Bar Charts

Bar Chart is the generic term, but this is sometimes reserved for charts where the categories appear on the vertical axis, with Column Charts being those where categories appear on the horizontal axis. In either case, the chart has a series of categories along one axis. Extending rightwards (or upwards) from each category is a rectangle whose width (height) is proportional to the value associated with this category. For example, if the categories related to products, then the size of the rectangle appearing against Product A might be proportional to the number sold, or the value of such sales.

|  © JMB (2014)  |  Used under a Creative Commons licence  |

The exhibit above, which is excerpted from Data Visualisation – A Scientific Treatment, is a compound one in which two bar charts feature prominently.

Sometimes the bars are clustered to allow multiple series to be charted side-by-side, for example yearly sales for 2015 to 2018 might appear against each product category. Or – as above – sales for Product A and Product B may both be shown by country.

Another approach is to stack bars or columns on top of each other, something that is sometimes useful when comparing how the make-up of something has changed.

Bubble Charts

Bubble Charts are used to display three dimensions of data on a two dimensional chart. A circle is placed with its centre at a value on the horizontal and vertical axes according to the first two dimensions of data, but then the area (or less commonly the diameter [2]) of the circle reflects the third dimension. The result is reminiscent of a glass of champagne (though maybe this says more about the author than anything else).
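The area-versus-diameter distinction can be made concrete with a small helper. This is an illustrative sketch (the function name and the scaling constant are my own, not taken from any particular charting library):

```python
import math

def bubble_radius(value, max_value, max_radius=40.0):
    """Radius (e.g. in pixels) such that the bubble's *area*, not its
    diameter, is proportional to the plotted value. Because area grows
    with the square of the radius, we scale by the square root."""
    return max_radius * math.sqrt(value / max_value)

# A value a quarter of the maximum gets half the radius,
# i.e. a quarter of the area
radius = bubble_radius(25, 100)  # 20.0
```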

You can also use bubble charts in a quite visceral way, as exemplified by the chart above. The vertical axis plots the number of satellites of the four giant planets in the Solar System. The horizontal axis plots the closest that they ever come to the Sun. The size of each bubble is proportional to the planet’s relative size.

Cartograms

There does not seem to be a generally accepted definition of Cartograms. Some authorities describe them as any diagram using a map to display statistical data; I cover this type of general chart in Map Charts below. Instead I will define a Cartogram more narrowly as a geographic map where areas of map sections are changed to be proportional to some other value; resulting in a distorted map. So, in a map of Europe, the size of countries might be increased or decreased so that their new areas are proportional to each country’s GDP.

Alternatively, the above cartogram of the United States has been distorted (and coloured) to emphasise the population of each state. The dark blue of California and the slightly less dark blues of Texas, Florida and New York dominate the map.

Histograms

A type of Bar Chart (typically with categories along the horizontal axis) where the categories are bins (or buckets) and the bars are proportional to the number of items falling into a bin. For example, the bins might be ranges of ages, say 0 to 19, 20 to 39, 40 to 49 and 50+ and the bars appearing against each might be the UK female population falling into each bin.
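The binning step can be sketched in a few lines. This is a minimal illustration of the idea (the function and the sample ages are my own, hypothetical choices):

```python
def histogram(values, edges):
    """Count how many values fall into each bin. With edges [0, 20, 40, 50]
    the bins are 0-19, 20-39, 40-49 and 50+ (each edge starts a bin)."""
    counts = [0] * len(edges)
    for v in values:
        idx = 0
        for i, edge in enumerate(edges):
            if v >= edge:
                idx = i  # remember the last edge that v reaches
        counts[idx] += 1
    return counts

ages = [3, 17, 22, 35, 41, 48, 52, 67]
counts = histogram(ages, [0, 20, 40, 50])  # [2, 2, 2, 2]
```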

The diagram above is a bipartite quasi-histogram [3] that I created to illustrate another article. It is not a true histogram as it shows percentages for and against in each bin rather than overall frequencies.

In the same article, I addressed this shortcoming with a second view of the same data, which is more histogram-like (apart from having a total category) and appears above. The point that I was making related to how Data Visualisation can both inform and mislead depending on the presentational choices taken.

Line Charts
Fan Charts, Area Charts

These typically have categories across the horizontal axis and could be considered as a set of line segments joining up the tops of what would be the rectangles on a Bar Chart. Clearly multiple lines, associated with multiple series, can be plotted simultaneously without the need to cluster rectangles as is required with Bar Charts. Lines can also be used to join up the points on Scatter Charts assuming that these are sufficiently well ordered to support this.

Adaptations of Line Charts can also be used to show the probability of uncertain future events as per the exhibit above. The single red line shows the actual value of some metric up to the middle section of the chart. Thereafter it is the central prediction of a range of possible values. Lying above and below it are shaded areas which show bands of probability. For example it may be that the probability of the actual value falling within the area that has the darkest shading is 50%. A further example is contained in Limitations of Business Intelligence. Such charts are sometimes called Fan Charts.

Another type of Line Chart is the Area Chart. If we can think of a regular Line Chart as linking the tops of an invisible Bar Chart, then an Area Chart links the tops of an invisible Stacked Bar Chart. The effect is that the way a band expands and contracts as we move across the chart shows how the contribution its category makes to the whole changes over time (or across whatever other category we choose for the horizontal axis).

See also: The first exhibit in New Thinking, Old Thinking and a Fairytale
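The “tops of an invisible Stacked Bar Chart” idea amounts to taking cumulative sums at each horizontal position. A minimal sketch (names and figures are illustrative only):

```python
def stack_series(series):
    """Turn raw per-series values into cumulative stacked heights.
    series[i][j] is series i's value at horizontal position j; the
    returned lists give the top of each band, i.e. the lines an
    Area Chart actually draws."""
    running = [0] * len(series[0])
    stacked = []
    for s in series:
        running = [r + v for r, v in zip(running, s)]
        stacked.append(list(running))
    return stacked

# Two categories over three periods: each band's top is a running total
tops = stack_series([[10, 12, 8], [5, 5, 9]])  # [[10, 12, 8], [15, 17, 17]]
```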

Map Charts

These place data on top of geographic maps. If we consider the canonical example of a map of the US divided into states, then the degree of shading of each state could be proportional to some state-related data (e.g. average income quartile of residents). Or more simply, figures could appear against each state. Bubbles could be placed at the location of major cities (or maybe a bubble per country or state etc.) with their size relating to some aspect of the locale (e.g. population). An example of this approach might be a map of US states with their relative populations denoted by Bubble area.

Also data could be overlaid on a map, for example – as shown above – coloured bands corresponding to different intensities of rainfall in different areas. This exhibit is excerpted from Hurricanes and Data Visualisation: Part I – Rainbow’s Gravity.

Pie Charts

These circular charts normally display a single series of categories with values, showing the proportion each category contributes to the total. For example a series might be the nations that make up the United Kingdom and their populations: England 55.62 million people, Scotland 5.43 million, Wales 3.13 million and Northern Ireland 1.87 million.

The whole circle represents the total of all the category values (e.g. the UK population of 66.05 million people [4]). The ratio of a segment’s angle to 360° (i.e. the whole circle) is equal to the percentage of the total represented by the linked category’s value (e.g. Scotland is 8.2% of the UK population and so will have a segment with an angle of just under 30°).
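The angle calculation described above is simple to express in code. The following sketch uses the UK population figures quoted in the text:

```python
def pie_angles(values):
    """Segment angles in degrees: each value's share of the total,
    multiplied by the 360 degrees of the whole circle."""
    total = sum(values)
    return [360.0 * v / total for v in values]

# UK nations' populations in millions, as in the example above:
# England, Scotland, Wales, Northern Ireland
populations = [55.62, 5.43, 3.13, 1.87]
angles = pie_angles(populations)
# Scotland's segment comes out just under 30 degrees, matching the text
```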

Sometimes – as illustrated above – the segments are “exploded” away from each other. This is taken from the same article as the other voting analysis exhibits.

See also: As Nice as Pie, which examines the pros and cons of this type of chart in some depth.

Radar Charts

Radar Charts are used to plot one or more series of categories with values that fall into the same range. If there are six categories, then each has its own axis called a radius and the six of these radiate at equal angles from a central point. The calibration of each radial axis is the same. For example Radar Charts are often used to show ratings (say from 5 = Excellent to 1 = Poor) so each radius will have five points on it, typically with low ratings at the centre and high ones at the periphery. Lines join the values plotted on each adjacent radius, forming a jagged loop. Where more than one series is plotted, the relative scores can be easily compared. A sense of aggregate ratings can also be garnered by seeing how much of the plot of one series lies inside or outside of another.
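The geometry described above – n equally spaced radii, each calibrated identically – can be sketched as follows. The function name and the “start at the top, go clockwise” convention are my own choices, not a standard:

```python
import math

def radar_points(ratings, max_rating=5, radius=1.0):
    """(x, y) position of each plotted rating. Axis i points at an angle
    of 90 - i * (360 / n) degrees, so the first axis points straight up
    and subsequent axes are spaced equally clockwise. Each point sits
    rating / max_rating of the way along its axis."""
    n = len(ratings)
    points = []
    for i, rating in enumerate(ratings):
        angle = math.radians(90 - i * 360.0 / n)
        distance = radius * rating / max_rating
        points.append((distance * math.cos(angle), distance * math.sin(angle)))
    return points
```

Joining consecutive points (and closing the loop back to the first) gives the jagged loop for one series; plotting a second series on the same axes allows the inside/outside comparison mentioned above.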

I use Radar Charts myself extensively when assessing organisations’ data capabilities. The above exhibit shows how an organisation ranks in five areas relating to Data Architecture compared to the best in their industry sector [5].

Scatter Charts

In most of the cases we have dealt with to date, one axis has contained discrete categories and the other continuous values (though our rating example for the Radar Chart had discrete categories and values). For a Scatter Chart both axes plot values, either continuous or discrete. A series would consist of a set of pairs of values, one to be plotted on the horizontal axis and one to be plotted on the vertical axis. For example a series might be a number of pairs of midday temperature (to be plotted on the horizontal axis) and sales of ice cream (to be plotted on the vertical axis). As may be deduced from the example, often the intention is to establish a link between the pairs of values – do ice cream sales increase with temperature? This aspect can be highlighted by drawing a line of best fit on the chart, one that minimises the total squared vertical distance between each plotted point and the line. Further series, say sales of coffee versus midday temperature, can be added.
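A line of best fit is most commonly computed by ordinary least squares. A minimal sketch, with hypothetical temperature and sales figures invented for illustration:

```python
def best_fit(xs, ys):
    """Ordinary least-squares fit y = a + b * x, minimising the total
    squared vertical distance from each point to the line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical midday temperatures (Celsius) and ice cream sales
temps = [16, 18, 21, 24, 28]
sales = [120, 135, 160, 180, 210]
intercept, slope = best_fit(temps, sales)
# A positive slope suggests sales rise with temperature
```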

Here is a further example, which illustrates potential correlation between two sets of data, one on the x-axis and the other on the y-axis [6]:

As always, a note of caution must be introduced when looking to establish correlations using scatter graphs. The inimitable Randall Munroe of xkcd.com [7] explains this pithily as follows:

|  © Randall Munroe, xkcd.com (2009)  |  Excerpted from: Extrapolating  |

Tree Maps

Tree Maps require a little bit of explanation. The best way to understand them is to start with something more familiar, a hierarchy diagram with three levels (i.e. something like an organisation chart). Consider a cafe that sells beverages, so we have a top level box labelled Beverages. The Beverages box splits into Hot Beverages and Cold Beverages at level 2. At level 3, Hot Beverages splits into Tea, Coffee, Herbal Tea and Hot Chocolate; Cold Beverages splits into Still Water, Sparkling Water, Juices and Soda. So there is one box at level 1, two at level 2 and eight at level 3. As ever a picture paints a thousand words:

Next let’s also label each of the boxes with the value of sales in the last week. If you add up the sales for Tea, Coffee, Herbal Tea and Hot Chocolate we obviously get the sales for Hot Beverages.

A Tree Map takes this idea and expands on it. A Tree Map using the data from our example above might look like this:

First, instead of being linked by lines, boxes at level 3 (leaves let’s say) appear within their parent box at level 2 (branches maybe) and the level 2 boxes appear within the overall level 1 box (the whole tree); so everything is nested. Sometimes, as is the case above, rather than having the level 2 boxes drawn explicitly, the level 3 boxes might be colour coded. So above Tea, Coffee, Herbal Tea and Hot Chocolate are mid-grey and the rest are dark grey.

Next, the size of each box (at whatever level) is proportional to the value associated with it. In our example, 66.7% of sales ($\frac{1000}{1500}$) are of Hot Beverages. Then two-thirds of the Beverages box will be filled with the Hot Beverages box and one-third ($\frac{500}{1500}$) with the Cold Beverages box. If 20% of Cold Beverages sales ($\frac{100}{500}$) are Still Water, then the Still Water box will fill one fifth of the Cold Beverages box (or one fifteenth – $\frac{100}{1500}$ – of the top level Beverages box).
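The proportional sizing reduces to dividing each leaf’s value by the grand total. The figures below are illustrative ones chosen to be consistent with the fractions quoted above (Hot Beverages totalling 1000 and Cold Beverages 500); the individual product values are my own invention:

```python
def treemap_fractions(leaf_values):
    """Fraction of the top-level box each leaf box occupies:
    its value divided by the grand total."""
    total = sum(leaf_values.values())
    return {name: value / total for name, value in leaf_values.items()}

# Hypothetical weekly sales: Hot Beverages sum to 1000, Cold to 500
sales = {
    "Tea": 400, "Coffee": 450, "Herbal Tea": 75, "Hot Chocolate": 75,
    "Still Water": 100, "Sparkling Water": 100, "Juices": 150, "Soda": 150,
}
fractions = treemap_fractions(sales)
# Still Water occupies 100 / 1500 = one fifteenth of the whole chart
```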

It is probably obvious from the above, but it is non-trivial to find a layout that has all the boxes at the right size, particularly if you want to do something else, like have the size of boxes increase from left to right. This is a task generally best left to some software to figure out.

In Closing

The above review of various chart types is not intended to be exhaustive. For example, it doesn’t include Waterfall Charts [8], Stock Market Charts (or Open / High / Low / Close Charts [9]), or 3D Surface Charts [10] (which seldom are of much utility outside of Science and Engineering in my experience). There are also a number of other more recherché charts that may be useful in certain niche areas. However, I hope we have covered some of the more common types of charts and provided some helpful background on both their construction and usage.

Notes

[1] Certainly by my normal standards!
[2] Research suggests that humans are more attuned to comparing areas of circles than, say, their diameters.
[3] © peterjamesthomas.com Ltd. (2019).
[4] Excluding overseas territories.
[5] This has been suitably redacted of course. Typically there are four other such exhibits in my assessment pack: Data Strategy, Data Organisation, MI & Analytics and Data Controls, together with a summary radar chart across all five lower level ones.
[6] The atmospheric CO2 records were sourced from the US National Oceanographic and Atmospheric Administration’s Earth System Research Laboratory and relate to concentrations measured at their Mauna Loa station in Hawaii. The Global Average Surface Temperature records were sourced from the Earth Policy Institute, based on data from NASA’s Goddard Institute for Space Studies and relate to measurements from the latter’s Global Historical Climatology Network. This exhibit is meant to be a basic illustration of how a scatter chart can be used to compare two sets of data. Obviously actual climatological research requires a somewhat more rigorous approach than the simplistic one I have employed here.
[7] Randall’s drawings are used (with permission) liberally throughout this site, including:
[8] Waterfall Chart – Wikipedia.
[9] Open-High-Low-Close Chart – Wikipedia.
[10] Surface Chart – AnyCharts.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# The peterjamesthomas.com Data Strategy Hub

Today we launch a new on-line resource, The Data Strategy Hub. This presents some of the most popular Data Strategy articles on this site and will expand in coming weeks to also include links to articles and other resources pertaining to Data Strategy from around the Internet.

If you have an article you have written, or one that you read and found helpful, please post a link in a comment here or in the actual Data Strategy Hub and I will consider adding it to the list.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# Data Visualisation according to a Four-year-old

When I recently published the latest edition of The Data & Analytics Dictionary, I included an entry on Charts which briefly covered a number of the most frequently used ones. Given that entries in the Dictionary are relatively brief [1] and that its layout allows little room for illustrations, I decided to write an expanded version as an article. This will be published in the next couple of weeks.

One of the exhibits that I developed for this charts article was to illustrate the use of Bubble Charts. Given my childhood interest in Astronomy, I came up with the following – somewhat whimsical – exhibit:

Bubble Charts are used to plot three dimensions of data on a two dimensional graph. Here the horizontal axis is how far each of the gas and ice giants is from the Sun [2], the vertical axis is how many satellites each planet has [3] and the final dimension – indicated by the size of the “bubbles” – is the actual size of each planet [4].

Anyway, I thought it was a prettier illustration of the utility of Bubble Charts than the typical market size analysis they are often used to display.

However, while I was doing this, my older daughter wandered into my office and said “look at the picture I drew for you Daddy” [5]. Coincidentally my muse had been her muse and the result is the Data Visualisation appearing at the top of this article. Equally coincidentally, my daughter had also encoded three dimensions of data in her drawing:

1. Rank of distance from the Sun
2. Colour / appearance
3. Number of satellites [6]

She also started off trying to capture relative size. After a great start with Mercury, Venus and Earth, she then ran into some Data Quality issues with the later planets (she is only four).

Here is an annotated version:

I think I’m at least OK at Data Visualisation, but my daughter’s drawing rather knocked mine into a cocked hat [7]. And she included a comet, which makes any Data Visualisation better in my humble opinion; what Chart would not benefit from the inclusion of a comet?

Notes

[1] For me at least that is.
[2] Actually the measurement is the closest that each planet comes to the Sun, its perihelion.
[3] This may seem a somewhat arbitrary thing to plot, but a) the exhibit is meant to be illustrative only and b) there does nevertheless seem to be a correlation of sorts; I’m sure there is some physical reason for this, which I’ll have to look into sometime.
[4] Bubble Charts typically offer the option to scale bubbles such that either their radius / diameter or their area is in proportion to the value to be displayed. I chose the equatorial radius as my metric.
[5] It has to be said that this is not an atypical occurrence.
[6] For at least the four rocky planets; it might have taken a while to draw all 79 of Jupiter’s moons.
[7] I often check my prose for phrases that may be part of British idiom but not used elsewhere. In doing this, I learnt today that “knock into a cocked hat” was originally an American phrase; it is first found in the 1830s.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# The latest edition of The Data & Analytics Dictionary is now out

After a hiatus of a few months, the latest version of the peterjamesthomas.com Data and Analytics Dictionary is now available. It includes 30 new definitions, some of which have been contributed by people like Tenny Thomas Soman, George Firican, Scott Taylor and Taru Väre. Thanks to all of these for their help.

Remember that The Dictionary is a free resource and quoting contents (ideally with acknowledgement) and linking to its entries (via the buttons provided) are both encouraged.

If you would like to contribute a definition, which will of course be acknowledged, you can use the comments section here, or the dedicated form; we look forward to hearing from you [1].

The Data & Analytics Dictionary will continue to be expanded in coming months.

Notes

 [1] Please note that any submissions will be subject to editorial review and are not guaranteed to be accepted.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# In praise of Jam Doughnuts or: How I learned to stop worrying and love Hybrid Data Organisations

The above infographic is the work of Management Consultants Oxbow Partners [1] and employs a novel taxonomy to categorise data teams. First up, I would of course agree with Oxbow Partners’ statement that:

Organisation of data teams is a critical component of a successful Data Strategy

Indeed I cover elements of this in two articles [2]. So the structure of data organisations is a subject which, in my opinion, merits some consideration.

Oxbow Partners draw distinctions between organisations where the Data Team is separate from the broader business, ones where data capabilities are entirely federated with no discernible “centre” and hybrids between the two. The imaginative names for these are respectively The Burger, The Smoothie and The Jam Doughnut. In this article, I review Oxbow Partners’ model and offer some of my own observations.

The Burger – Centralised

Having historically recommended something along the lines of The Burger, not least when an organisation’s data capabilities are initially somewhere between non-existent and very immature, my views have changed over time, much as the characteristics of the data arena have also altered. I think that The Burger still has a role, in particular, in a first phase where data capabilities need to be constructed from scratch, but it has some weaknesses. These include:

1. The pace of change in organisations has increased in recent years. Also, many organisations have separate divisions or product lines and / or separate geographic territories. Change can be happening in sometimes radically different ways in each of these as market conditions may vary considerably between Division A’s operations in Switzerland and Division B’s operations in Miami. It is hard for a wholly centralised team to react with speed in such a scenario. Even if they are aware of the shifting needs, capacity may not be available to work on multiple areas in parallel.

2. Again in the above scenario, it is also hard for a central team to develop deep expertise in a range of diverse businesses spread across different locations (even if within just one country). A central team member who has to understand the needs of 12 different business units will necessarily be at a disadvantage when considering any single unit compared to a colleague who focuses on that unit and nothing else.

3. A further challenge presented here is maintaining the relationships with colleagues in different business units that are typically a prerequisite for – for example – driving adoption of new data capabilities.

The Smoothie – Federated

So – to address these shortcomings – maybe The Smoothie is a better organisational design. Well maybe, but also maybe not. Problems with these arrangements include:

1. Probably biggest of all, it is an extremely high-cost approach. The smearing out of work on data capabilities inevitably leads to duplication of effort with – for example – the same data sourced or combined by different people in parallel. The pace of change in organisations may have increased, but I know few that are happy to bake large costs into their structures as a way to cope with this.

2. The same duplication referred to above creates another problem, the way that data is processed can vary (maybe substantially) between different people and different teams. This leads to the nightmare scenario where people spend all their time arguing about whose figures are right, rather than focussing on what the figures say is happening in the business [3]. Such arrangements can generate business risk as well. In particular, in highly regulated industries heterogeneous treatment of the same data tends to be frowned upon in external reviews.

3. The wholly federated approach also limits both opportunities for economies of scale and identification of areas where data capabilities can meet the needs of more than one business unit.

4. Finally, data resources who are fully embedded in different parts of a business may become isolated and may not benefit from the exchange of ideas that happens when other similar people are part of the immediate team.

So to summarise we have:

The Jam Doughnut – Hybrid

Which leaves us with The Jam Doughnut. In my opinion, this is a Goldilocks approach that captures as much as possible of the advantages of the other two set-ups, while mitigating their drawbacks. It is such an approach that tends to be my recommendation for most organisations nowadays. Let me spend a little more time describing its attributes.

I see the best way of implementing a Jam Doughnut approach as being via a hub-and-spoke model. The hub is a central Data Team; the spokes are data-centric staff in different parts of the business (Divisions, Functions, Geographic Territories etc.).

It is important to stress that each spoke is not a smaller copy of the central Data Team. Some roles will be more federated, some more centralised, according to what makes sense. Let’s consider a few different roles to illustrate this:

• Data Scientist – I would see a strong central group of these, developing methodologies and tools, but also that many business units would have their own dedicated people; “spoke”-based people could also develop new tools and new approaches, which could be brought into the “hub” for wider dissemination

• Analytics Expert – Similar to the Data Scientists, centralised “hub” staff might work more on standards (e.g. for Data Visualisation), developing frameworks to be leveraged by others (e.g. a generic harness for dashboards that can be leveraged by “spoke” staff), or selecting tools and technologies; “spoke”-based staff would be more into the details of meeting specific business needs

• Data Engineer – Some “spoke” people may be hybrid Data Scientists / Data Engineers and some larger “spoke” teams may have dedicated Data Engineers, but the needle moves more towards centralisation with this role

• Data Architect – Probably wholly centralised, but some “spoke” staff may have an architecture string to their bow, which would of course be helpful

• Data Governance Analyst – Also probably wholly centralised, this is not to downplay the need for people in the “spokes” to take accountability for Data Governance and Data Quality improvement, but these are likely to be part-time roles in the “spokes”, whereas the “hub” will need full-time Data Governance people

It is also important to stress that the various spokes should also be in contact with each other, swapping successful approaches, sharing ideas and so on. Indeed, you could almost see the spokes beginning to merge together somewhat to form a continuum around the Data Team. Maybe the merged spokes could form the “dough”, with the Data Team being the “jam”, something like this:

I label these types of arrangements a Data Community and this is something that I have looked to establish and foster in a few recent assignments. Broadly, a Data Community is something that all data-centric staff would feel part of; they are obviously part of their own segment of the organisation, but the Data Community is also part of their corporate identity. The Data Community facilitates best practice approaches, sharing of ideas, helping with specific problems and general discourse between its members. I will be revisiting the concept of a Data Community in coming weeks. For now I would say that one thing that can help it to function as envisaged is sharing common tooling. Again this is a subject that I will return to shortly.

I’ll close by thanking Oxbow Partners for some good mental stimulation – I will look forward to their next data-centric publication.

 Disclosure: It is peterjamesthomas.com’s policy to disclose any connections with organisations or individuals mentioned in articles. Oxbow Partners are an advisory firm for the insurance industry covering Strategy, Digital and M&A. Oxbow Partners and peterjamesthomas.com Ltd. have a commercial association and peterjamesthomas.com Ltd. was also engaged by one of Oxbow Partners’ principals, Christopher Hess, when he was at a former organisation.

Notes

[1] Though the author might have had a minor role in developing some elements of it as well.
[2] The Anatomy of a Data Function and A Simple Data Capability Framework.
[3] See also The impact of bad information on organisations.

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# A Simple Data Capability Framework

Introduction

As part of my consulting business, I end up thinking about Data Capability Frameworks quite a bit. Sometimes this is when I am assessing current Data Capabilities, sometimes it is when I am thinking about how to transition to future Data Capabilities. Regular readers will also recall my tripartite series on The Anatomy of a Data Function, which really focussed more on capabilities than purely organisation structure [1].

Detailed frameworks like the one contained in Anatomy are not appropriate for all audiences. Often I need to provide a more easily absorbed view of what a Data Function is and what it does. The exhibit above is one that I have developed and refined over the last three or so years and which seems to have resonated with a number of clients. It has – I believe – the merit of simplicity. I have tried to distil things down to the essentials. Here I will aim to walk the reader through its contents, much of which I hope is actually self-explanatory.

The overall arrangement has been chosen intentionally: the top three areas are visible activities, while the bottom three are more foundational areas [2], ones that are necessary for the top three boxes to be discharged well. I will start at the top left and work across and then down.

Collation of Data to provide Information

This area includes what is often described as “traditional” reporting [3], Dashboards and analysis facilities. The Information created here is invaluable for both determining what has happened and discerning trends / turning points. It is typically what is used to run an organisation on a day-to-day basis. Absence of such Information has been the cause of underperformance (or indeed major losses) in many an organisation, including a few that I have been brought in to help. The flip side is that making the necessary investments to provide even basic information has been at the heart of the successful business turnarounds that I have been involved in.

The bulk of Business Intelligence efforts would also fall into this area, but there is some overlap with the area I next describe as well.

Leverage of Data to generate Insight

In this second area we have disciplines such as Analytics and Data Science. The objective here is to use a variety of techniques to tease out findings from available data (both internal and external) that go beyond the explicit purpose for which it was captured. Thus data to do with bank transactions might be combined with publicly available demographic and location data to build an attribute model for both existing and potential clients, which can in turn be used to make targeted offers or product suggestions to them on Digital platforms.

It is my experience that work in this area can have a massive and rapid commercial impact. There are few activities in an organisation where a week’s work can equate to a percentage point increase in profitability, but I have seen insight-focussed teams deliver just that type of ground-shifting result.

Control of Data to ensure it is Fit-for-Purpose

This refers to a wide range of activities from Data Governance to Data Management to Data Quality improvement and indeed related concepts such as Master Data Management. Here as well as the obvious policies, processes and procedures, together with help from tools and technology, we see the need for the human angle to be embraced via strong communications, education programmes and aligning personal incentives with desired data quality outcomes.

The primary purpose of this important work is to ensure that the information an organisation collates and the insight it generates are reliable. A helpful by-product of doing the right things in these areas is that the vast majority of what is required for regulatory compliance is achieved simply by doing things that add business value anyway.
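The kind of Data Quality work described above can be partly automated. The sketch below shows one minimal way to express quality rules in code and measure the pass rate of a batch of records; the field names, rules and records are all hypothetical, purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]  # returns True when a record passes

# Hypothetical rules for a customer record
rules = [
    QualityRule("postcode_present", lambda r: bool(r.get("postcode", "").strip())),
    QualityRule("email_has_at", lambda r: "@" in r.get("email", "")),
]

def score_records(records: list) -> dict:
    """Return the pass rate for each rule across a batch of records."""
    return {
        rule.name: sum(rule.check(r) for r in records) / len(records)
        for rule in rules
    }

batch = [
    {"postcode": "SW1A 1AA", "email": "a@example.com"},
    {"postcode": "", "email": "bad-address"},
]
print(score_records(batch))  # {'postcode_present': 0.5, 'email_has_at': 0.5}
```

Tracking such pass rates over time gives the "obvious policies, processes and procedures" something concrete to act on, though of course the human and incentive angles remain just as important.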

Data Architecture / Infrastructure

Best practice has evolved in this area. When I first started focussing on the data arena, Data Warehouses were state of the art. More recently Big Data architectures, including things like Data Lakes, have appeared and – at least in some cases – begun to add significant value. However, I am on public record multiple times stating that technology choices are generally the least important in the journey towards becoming a data-centric organisation. This is not to say such choices are unimportant, but rather that other choices are more important, for example how best to engage your potential users and begin to build momentum [4].

Having said this, the model that seems to have emerged of late is somewhat different to the single version of the truth aspired to for many years by organisations. Instead best practice now encompasses two repositories: the first Operational, the second Analytical. At a high-level, arrangements would be something like this:

The Operational Repository would contain a subset of corporate data. It would be highly controlled, highly reconciled and used to support both regular reporting and a large chunk of dashboard content. It would be designed to also feed data to other areas, notably Finance systems. This would be complemented by the Analytical Repository, into which most corporate data (augmented by external data) would be poured. This would be accessed by a smaller number of highly skilled staff, Data Scientists and Analytics experts, who would use it to build models, produce one off analyses and to support areas such as Data Visualisation and Machine Learning.

It is not atypical for Operational Repositories to be SQL-based and Analytical Repositories to be Big Data-based, but you could use SQL for both, or indeed Big Data for both, according to the circumstances of an organisation and its technical expertise.
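The routing logic implied by the two-repository arrangement can be sketched in a few lines of code. This is a toy illustration, not a real implementation: the repository names, the "reconciled" flag and the sample records are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Repository:
    name: str
    records: list = field(default_factory=list)

operational = Repository("Operational")  # controlled, reconciled subset of corporate data
analytical = Repository("Analytical")    # most corporate data, augmented by external data

def ingest(record: dict) -> None:
    """Nearly everything lands in the Analytical Repository; only
    governed, reconciled records also reach the Operational one."""
    analytical.records.append(record)
    if record.get("reconciled"):
        operational.records.append(record)

for rec in [
    {"id": 1, "source": "ledger", "reconciled": True},
    {"id": 2, "source": "weblogs", "reconciled": False},
]:
    ingest(rec)

print(len(operational.records), len(analytical.records))  # 1 2
```

The design point is simply that the two stores have different admission criteria and different consumers, regardless of whether they are implemented in SQL or Big Data technologies.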

Data Operating Model / Organisation Design

Here I will direct readers to my (soon to be updated) earlier work on The Anatomy of a Data Function. However, it is worth mentioning a couple of additional points. First, an Operating Model for data must encompass the whole organisation, not just the Data Function. Such a model should cover how data is captured, sourced and used across all departments.

Second, I think that the concept of a Data Community is important here: a web of like-minded Data Scientists and Analytics people, sitting in various business areas and support functions, but linked to the central hub of the Data Function by common tooling, shared data sets (ideally Curated) and aligned methodologies. Such a virtual data team is of course predicated on an organisation hiring collaborative people who want to be part of and contribute to the Data Community, but those are the types of people that organisations should be hiring anyway [5].

Data Strategy

Our final area is that of Data Strategy, something I have written about extensively in these pages [6] and a major part of the work that I do for organisations.

It is an oft-repeated truism that a Data Strategy must reflect an overarching Business Strategy. While this is clearly the case, often things are less straightforward. For example, the Business Strategy may be in flux; this is particularly the case where a turn-around effort is required. Also, how the organisation uses data for competitive advantage may itself become a central pillar of its overall Business Strategy. Either way, rather than waiting for a Business Strategy to be finalised, there are a number of things that will need to be part of any Data Strategy: the establishment of a Data Function; a focus on making data fit-for-purpose to better support both information and insight; creation of consistent and business-focussed reporting and analysis; and the introduction or augmentation of Data Science capabilities. Many of these activities can help to shape a Business Strategy based on facts, not gut feel.

More broadly, any Data Strategy will include: a description of where the organisation is now (threats and opportunities); a vision for commercially advantageous future data capabilities; and a path for moving between the current and the future states. Rather than being PowerPoint-ware, such a strategy needs to be communicated assiduously and in a variety of ways so that it can be both widely understood and form a guide for data-centric activities across the organisation.

Summary

As per my other articles, the data capabilities that a modern organisation needs are broader and more detailed than those I have presented here. However, I have found this simple approach a useful place to start. It covers all the basic areas and provides a scaffold off which more detailed capabilities may be hung.

The framework has been informed by what I have seen and done in a wide range of organisations, but of course it is not necessarily the final word. As always I would be interested in any general feedback and in any suggestions for improvement.

Notes

[1] In passing, Anatomy is due for its second refresh, which will put greater emphasis on Data Science and its role as an indispensable part of a modern Data Function. Watch this space.

[2] Though one would hope that a Data Strategy is also visible!

[3] Though nowadays you hear “traditional” Analytics and “traditional” Big Data as well (on the latter see Sic Transit Gloria Magnorum Datorum); no doubt “traditional” Machine Learning will be with us at some point, if it isn’t here already.

[4] See also Building Momentum – How to begin becoming a Data-driven Organisation.

[5] I will be revisiting the idea of a Data Community in coming months, so again watch this space.

[6] Most explicitly in my three-part series:

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# A Retrospective of 2018’s Articles

This is the second year in which I have produced a retrospective of my blogging activity. As in 2017, I have failed miserably in my original objective of posting this early in January. Despite starting to write this piece on 18th December 2018, I have somehow sneaked into the second quarter before getting round to completing it. Maybe I will do better with 2019’s highlights!

Anyway, 2018 was a record-breaking year for peterjamesthomas.com. The site saw more traffic than in any other year since its inception; indeed hits were over a third higher than in any previous year. This increase was driven in part by the launch of my new Maths & Science section, articles from which claimed no fewer than 6 slots in the 2018 top 10 articles, when measured by hits [1]. Overall the total number of articles and new pages I published exceeded 2017’s figures to claim the second spot behind 2009, our first year in business.

As with every year, some of my work was viewed by tens of thousands of people, while other pieces received less attention. This is my selection of the articles that I enjoyed writing most, which does not always overlap with the most popular ones. Given the advent of the Maths & Science section, there are now seven categories into which I have split articles. These are as follows:

In each category, I will pick out one or two pieces which I feel are both representative of my overall content and worth a read. I would be more than happy to receive any feedback on my selections, or suggestions for different choices.

Notes

[1]

The 2018 Top Ten by Hits

1. The Irrational Ratio
2. A Brief History of Databases
3. Euler’s Number
4. The Data and Analytics Dictionary
5. The Equation
6. A Brief Taxonomy of Numbers
7. When I’m 65
8. How to Spot a Flawed Data Strategy
9. Building Momentum – How to begin becoming a Data-driven Organisation
10. The Anatomy of a Data Function – Part I

Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.

# More Definitions in the Data and Analytics Dictionary

The peterjamesthomas.com Data and Analytics Dictionary is an active document and I will continue to issue revised versions of it periodically. Here are 20 new definitions, including the first from other contributors (thanks Tenny!):

Remember that The Dictionary is a free resource and quoting contents (ideally with acknowledgement) and linking to its entries (via the buttons provided) are both encouraged.

People are now also welcome to contribute their own definitions. You can use the comments section here, or the dedicated form. Submissions will be subject to editorial review and are not guaranteed to be accepted.

From: peterjamesthomas.com, home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases

# In-depth with CDO Christopher Bannocks

Part of the In-depth series of interviews

 Today I am talking to Christopher Bannocks, who is Group Chief Data Officer at ING. ING is a leading global financial institution, headquartered in the Netherlands. As stressed in other recent In-depth interviews [1], data is a critical asset in banking and related activities, so Christopher’s role is a pivotal one. I’m very glad that he has been able to find time in his busy calendar to speak to us.
 Hello Christopher, can you start by providing readers with a flavour of your career to date and perhaps also explain why you came to focus on the data arena. Sure, it’s probably right to say I didn’t start out here, data was not my original choice, and for anyone of a similar age to me, data wasn’t a choice when I started out; in that respect it’s a “new segment”. I started out on a management development programme in a retail bank in the UK, after which I moved to be an operations manager in investment banking. As part of that time in my career, post Euro migration and Y2K (yes I am genuinely that old, I also remember Vinyl records and Betamax video!) [2] I was asked to help solve the data problem. What I recognised very quickly was this was an area with under-investment, that was totally central to the focus of that time – STP (Straight Through Processing). Equally it provided me with much broader perspectives, connections to all parts of the organisation that I previously didn’t have and it was at that point, some 20 years ago, that I decided this was the thing for me! I have since run and driven transformation in Reference Data, Master Data, KYC [3], Customer Data, Data Warehousing and more recently Data Lakes and Analytics, constantly building experience and capability in the Data Governance, Quality and data services domains, both inside banks, as a consultant and as a vendor.
 I am trying to get a picture of the role and responsibilities of the typical CDO (not that there appears to be such a thing), so would you mind touching on the span of your work at ING? I know you have a strong background in Enterprise Data Management, how does the CDO role differ from this area? I guess that depends on how you determine the scope of Enterprise Data Management. However, in reality, the CDO role encompasses Enterprise Data Management, although generally speaking the EDM role includes responsibility for the day to day operations of the collection processes, which in my current role I don’t have. I have accountability for the governance and quality through those processes and for making the data available for downstream consumers, like Analytics, Risk, Finance and HR. My role encompasses being the business driver for the data platform that we are rolling out across the organisation and its success in terms of the data going onto the platform and the curation of that data in a governed state, depending on the consumer requirements. My role today boils down to 4 key objectives – data availability, data transparency, data quality and data control.
 I know that ING consists of many operating areas and has adopted a federated structure with respect to data matters. What are the strengths of this approach and how does it work on a day-to-day basis? This approach ensures that the CDO role (I have a number of CDOs functionally reporting to me) remains close to the business and the local entity it supports, it ensures that my management team is directly connected to the needs of the business locally, and that the local businesses have a direct connection to the global strategy. What I would say is that there is no “one size fits all” approach to the CDO organisation model. It depends on the company culture and structure and it needs to fit with the stated objectives of the role as designed. On a day to day basis, we are aligned with the business units and the functional units so we have CDOs in all of these areas. Additionally I have a direct set of reports who drive the standard solutions around tooling, governance, quality, data protection, Data Ethics, Metadata and data glossary and models.
 Helping organisations become “data-centric” is a key part of what you do. I often use this phrase myself; but was recently challenged to elucidate its meaning. What does a “data-centric” organisation look like to you? What sort of value does data-centricity release in your experience? Data centric is a cultural shift, in the structures of the past where we have technology people and process, we now have data that touches all three. You know if you have reached the right place when data becomes part of the decision making process across the organisation, when decisions are only made when data is presented to support it and this is of the requisite quality. This doesn’t mean all decisions require data, some decisions don’t have data and that’s where leaderships decisions can be made, but for those decisions that have good data to support them, these can be made easily and at a lower level in the organisation. Hence becoming data centric supports an agile organisation and servant / leadership principles, utilising data makes decisions faster and outcomes better.
 I am on record multiple times [4] stating that technology choices are much less important than other aspects of data work. However, it is hard to ignore the impact that Big Data and related technologies have had. A few years into the cycle of Big Data adoption, do you see the tools and approaches yielding the expected benefits? Should I revisit my technology-agnostic stance? I have also been on record multiple times saying that every data problem is a people problem in disguise. I still hold that this is true today although potentially this is changing. The problems of the past and still to this day originate with poor data stewardship, I saw it happening in front of my eyes last week in Heathrow when I purchased something in a well known electronics store. Because I have an overseas postcode the guy at the checkout put dummy data into all the fields to get through the process quickly and not impact my customer experience, I desperately wanted to stop him but also wanted to catch my plane. This is where the process efficiency impacts good data collection. If the software that supports the process isn’t flexible, the issue won’t be fixed without technology intervention, this is often true in data quality problems which have knock on effects to customers, which at the end of the day are why we are all here. This is a people problem (because who is taking responsibility here for fixing it, or educating that guy at the checkout) AND it’s a technology problem, caused by inflexible or badly implemented systems. However, in the future, with more focus on customer driven checkout, digital channels and better customer experience, better interface driven data controls and robotics and AI, it may become further nuanced. People are still involved, communication remains critical but we cannot ignore technology in the digital age. 
For a long time, data groups have struggled with getting access to good tools and technology, now this technology domain is growing daily, and the tools are improving all the time. What we can do now with data at a significantly lower cost than ever before is amazing, and continues to improve all the time. Hence ignoring technology can be costly when extending capabilities to your stakeholders and could be a serious mistake, however focusing only on technology and ignoring people, process, communication etc is also a serious mistake. Data Leaders have to be multi-disciplinary today, and be able to keep up with the pace of change.
 I have heard you talk about “data platforms”, what do you mean by this and how do these contrast with another perennial theme, that of data democratisation? How does a “data platform” relate to – say – Data Science teams? Data democratisation is enabled by the data platform. The data platform is the technology enablement of the four pillars I mentioned before, availability, transparency, quality and control. The platform is a collection of technologies that standardise the approach and access to well governed data across the organisation. Data Democratisation is simply making data available and abstracting away from siloed storage mechanisms, but the platform wraps the implementation of quality, controls and structure to the way that happens. Data Science teams then get the data they need, including data curation services to find the data they need quickly, for governed and structured data, Data Science teams can utilise the glossary to identify what they need and understand the level of quality based on consumer views, they also have access to metadata in standard forms. This empowers the analytics capability to move faster, spend less time on data discovery and curation, structure and quality and more time on building analytics.
 I mentioned the federated CDO team at ING above and assume this is reflected in the rest of the organisation structure. ING also has customers in 40 countries and I know first-hand that a global footprint adds complexity. What are the challenges in being a CDO in such an environment? Does this put a higher premium on influencing skills for example? I am not sure it puts a higher premium on influencing skills, these have a high premium in any CDO role, even if you don’t have a federated structure, the reality is if you are in a data role you have more stakeholders than anyone else in the company, so influencing skills remain premium. A global footprint means complexity for sure, it means differences in a world where you are trying to standardise and it means you have to be tuned in to cultural differences and boundaries. It also means a great deal of variety, opportunities to learn new cultures and approaches, it means you have to listen and understand and flex your style and it means pragmatism plays an important part in your decision making process. At ING we have an amazing team of people who collaborate in a way I have never experienced before, supported by a strong attachment and commitment to the success of the business and our customers. This makes dealing with the complexity a team effort, with great energy and a fantastic working environment. In an organisation without the drive and passion we have here it would present challenges, with the support of the board and being a core part of the overall strategy, it ensures broad alignment to the goal, which makes the challenge easier for the organisation to solve, not easy, but easier and more fun.
 Building on the last point, every CDO I have interviewed has stressed the importance of relationships; something that chimes with my own experience. How do you go about building strong relationships and maintaining them when inevitable differences of opinion or clashes in interests arise? I touched on this a little earlier. Pragmatism over purism. I see purist everywhere in data, with views that are so rigid that the execution of them is doomed because purism doesn’t build relationships. Relationships are built based on what you bring and give up, on what you can give, not on what you can get. I try every day to achieve this, but I am human too, so I don’t always get it right, I hope I get it right more than I get it wrong and where I get it wrong I hope I can be forgiven for my intention is pure. We owe it to our customers to work together for their benefit, where we have differences the customer outcomes should drive our decisions, in that we have a common goal. Disagreements can be helped and supported by identifying a common goal, this starts to align people behind a common outcome. Individual interests can be put aside in preference of the customer interest.
 I know that you are very interested in data ethics and feel that this is an important area for CDOs to consider. Can you tell the readers a bit more about data ethics and why they should be central to an organisation’s approach to data? In an increasingly digital world, the use of data is becoming widespread and the pace at which it is used is increasing daily, our compute power grows exponentially as does the availability of data. Given this, we need an ethical framework to help us make good decisions with our customers and stakeholders in mind. How do you ensure that decisions in your organisation about how you use data are ethical? What are ethical decisions in your organisation and what are the guiding principles? If this isn’t clear and communicated to help all staff make good decisions, or have good discussions, there is a real danger that decisions may not be properly socialised before all angles are considered. Just meeting the bar of privacy regulation may not be enough, you can still meet that bar and do things that your customers may disagree with or find “creepy”, so the correct thought needs to be applied and the organisation engaged to ensure the correct conversations take place, and there is a place to go to discuss ethics. I am not saying that there is a silver bullet to solve this problem, but the conversation and the ability to have the conversation in a structured way helps the organisation understand its approach and make good decisions in this respect. That’s why CDOs should consider this an important part of the role and a critical engagement with users of data across the organisation.
 Finally, I have worked for businesses with a presence in the Netherlands on a number of occasions. As a Brit living abroad, how have you found Amsterdam? What – if any – adaptations have you had to make to your style to thrive in a somewhat different culture? Having lived in India, I thought my move to the Netherlands could only be easy. I arrived thinking that a 45 minute flight could not possibly provide as many challenges as an 11 hour flight, especially from a cultural perspective. Of course I was wrong because any move to a different culture provides challenges you could never have expected and it’s the small adjustments that take you by surprise the most. It’s always a hugely enjoyable learning experience though. London is a more top down culture whereas in the Netherlands it’s a much flatter approach; my experience here is positive although it does require an adjustment. I work in Amsterdam but live in a small village, chosen deliberately to integrate faster. It’s harder, more of a challenge, but helps you understand the culture as you make friends with local people and get closer to the culture. My wife and I have never been fans of the expat scene, we prefer to integrate, however difficult this feels at first, it’s worth it in the long run. I must admit though that I haven’t conquered the language yet, it’s a real work in progress!
 Christopher, I really enjoyed our chat, which I believe will also be of great interest to readers. Thank you.

Christopher Bannocks can be reached via his LinkedIn profile.

Disclosure: At the time of publication, neither peterjamesthomas.com Ltd. nor any of its Directors had any shared commercial interests with Christopher Bannocks, ING or any entities associated with either of these.

 If you are a Chief Data Officer, a Chief Analytics Officer, a Director of Data, or hold some other “Top Data Job” and would like to share your thoughts with the readers of this site in an interview like this one, please get in contact.

Notes

[1] Specifically:

An in-depth interview with experienced Chief Data Officer Roberto Maranca – Previously of Lloyds Banking Group

In-depth with CDO Jo Coutuer – Of BNP Paribas Fortis

[2] So does the interviewer.

[3] Know your customer.

[4] Most directly in: A bad workman blames his [Business Intelligence] tools

From: peterjamesthomas.com, home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases