Nucleosynthesis and Data Visualisation

Nucleosynthesis-based Periodic Table
© Jennifer Johnson, Sloan Digital Sky Survey, http://www.sdss.org/ (Click to view a larger size)

The Periodic Table is one of the truly iconic scientific images [1], albeit one with a variety of forms. In the picture above, the normal Periodic Table has been repurposed in a novel manner to illuminate a different field of scientific enquiry. This version was created by Professor Jennifer Johnson (@jajohnson51) of The Ohio State University and the Sloan Digital Sky Survey (SDSS). It comes from an article on the SDSS blog entitled Origin of the Elements in the Solar System; I’d recommend reading the original post.
 
 
The historical perspective

Modern Periodic Table (borrowed from Wikipedia)

A modern rendering of the Periodic Table appears above. It is probably superfluous to mention, but the Periodic Table is a visualisation of an underlying principle about elements: that they fall into families with similar properties and that – if appropriately arranged – patterns emerge, with family members appearing at regular intervals. Thus the Alkali Metals [2], all of which share many important characteristics, form a column at the left-hand extremity of the Table above; the Noble Gases [3] form a column on the far right; and, in between, other families form further columns.

Given that the underlying principle driving the organisation of the Periodic Table is essentially a numeric one, we can readily see that it is not just a visualisation, but a data visualisation. This means that Professor Johnson and her colleagues are using an existing data visualisation to convey new information, a valuable technique to have in your arsenal.

Mendeleev and his original periodic table (borrowed from Wikipedia)

One of the original forms of the Periodic Table appears above, alongside its inventor, Dmitri Mendeleev.

As with most things in science [4], my beguilingly straightforward formulation of “its inventor” is rather less clear-cut in practice. Mendeleev’s work – like Newton’s before him – rested “on the shoulders of giants” [5]. However, as with many areas of scientific endeavour, the chain of contributions winds back a long way, specifically to one of the greatest exponents of the scientific method [6], Antoine Lavoisier. The later Law of Triads [7] was another significant step along the path and – to mix a metaphor – many other scientists provided pieces of the jigsaw puzzle that Mendeleev finally assembled. Indeed, around the same time that Mendeleev published his ideas [8], so did the much less celebrated Julius Lothar Meyer; Meyer’s and Mendeleev’s work shared several characteristics.

The epithet of inventor attached itself to Mendeleev for two main reasons: his leaving of gaps in his table, pointing the way to as-yet-undiscovered elements; and his ordering of table entries according to family behaviour rather than atomic mass [9]. None of this is to take away from Mendeleev’s seminal work; it is wholly appropriate that his name will always be linked with his most famous insight. Instead, my intention is to demonstrate that the course of true science never did run smooth [10].
 
 
The Johnson perspective

Professor Jennifer Johnson

Since its creation – and during its many reformulations – the Periodic Table has acted as a pointer to many areas of scientific enquiry. Why do elements fall into families in this way? How many elements are there? Is it possible to achieve the Alchemists’ dream and transmute one element into another? However, the question which Professor Johnson’s diagram addresses is another one: why is there such an abundance of elements and where did they all come from?

The term nucleosynthesis, which appears in the title of this article, covers the processes by which different atoms are formed, either from base nucleons (protons and neutrons) or by the combination of smaller atoms. It is nucleosynthesis which attempts to answer the question we are now considering. There are several different types.

The Big Bang (borrowed from NASA)

Our current perspective on where everything in the observable Universe came from is of course the Big Bang [11]. This rather tidily accounts for the abundance of element 1, Hydrogen, and much of that of element 2, Helium. This is our first type of nucleosynthesis, Big Bang nucleosynthesis. However, it does not explain where all of the heavier elements came from [12]. The first part of the answer lies in processes of nuclear fusion in stars. The most prevalent form of this is the fusion of Hydrogen to form Helium (accounting for the remaining Helium atoms), but the process continues, creating heavier elements, albeit in ever decreasing quantities. This is stellar nucleosynthesis and refers to those elements created in stars during their normal lives.

While readers may be ready to accept the creation of these heavier elements in stars, an obvious question is: how come they aren’t in stars any longer? The answer lies in what happens at the end of a star’s life. This depends on a number of factors, particularly the star’s mass and whether or not it is associated with another star, e.g. in a binary system.

A canonical binary system (borrowed from Disney)

Broadly speaking, higher mass stars tend to go out with a bang [13], lower mass ones with various kinds of whimpers. The exception to the latter is where a low mass star is coupled to another star, an arrangement which can also lead to a considerable explosion [14]. Of whatever type, violent or passive, star deaths create all of the rest of the heavier elements. Supernovae are also responsible for releasing many heavy elements into interstellar space, a process tagged explosive nucleosynthesis.

The aftermath of a supernova (borrowed from NASA again)

Into this relatively tidy model of nucleosynthesis intrudes the phenomenon of cosmic ray fission, in which cosmic rays [15] impact heavier elements, causing them to split into smaller constituents. We believe that this process is behind most of the Beryllium and Boron in the Universe, as well as some of the Lithium. There are obviously other mechanisms at work, such as radioactive decay, but the vast majority of elements are created either in stars or during the death of stars.

I have elided many of the details of nucleosynthesis here; it is a complicated and evolving field. What Professor Johnson’s graphic achieves is to reflect current academic thinking about which elements are produced by which type of process. The diagram certainly highlights the fact that the genesis of the elements is a complex story. Perhaps less prosaically, it also encapsulates Carl Sagan’s famous aphorism, the one that Professor Johnson quotes at the beginning of her article and which I will use to close mine.

We are made of starstuff.


 Notes

 
[1]
 
See Data Visualisation – A Scientific Treatment for a perspective on another member of this select group.
 
[2]
 
Lithium, Sodium, Potassium, Rubidium, Caesium and Francium (Hydrogen is sometimes shown at the top of this list as well).
 
[3]
 
Helium, Neon, Argon, Krypton, Xenon and Radon.
 
[4]
 
Watch this space for an article pertinent to this very subject.
 
[5]
 
Isaac Newton, on 15th February 1676, in a letter to Robert Hooke; but employing a turn of phrase which had been in use for many years.
 
[6]
 
And certainly the greatest scientist ever to be beheaded.
 
[7]
 
Döbereiner, J. W. (1829) “An Attempt to Group Elementary Substances according to Their Analogies”. Annalen der Physik und Chemie.
 
[8]
 
In truth somewhat earlier.
 
[9]
 
The emergence of atomic number as the organising principle behind the ordering of elements happened somewhat later, vindicating Mendeleev’s approach.

We have:

atomic mass ≅ number of protons in the nucleus of an element + number of neutrons

whereas:

atomic number = number of protons only

The number of neutrons can jump about between successive elements, meaning that arranging them in order of atomic mass can give a different result from arranging them by atomic number.
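
By way of a concrete example, here is a short Python sketch of my own (not part of the original article) using two well-known element pairs, Argon/Potassium and Tellurium/Iodine, whose order by atomic mass is the reverse of their order by atomic number:

# A minimal, illustrative sketch: sorting a handful of elements by atomic mass
# versus by atomic number. The Tellurium/Iodine pair is the classic case that
# Mendeleev resolved in favour of chemical family rather than mass.
elements = [
    ("Argon",     18, 39.95),   # (name, atomic number, atomic mass)
    ("Potassium", 19, 39.10),
    ("Tellurium", 52, 127.60),
    ("Iodine",    53, 126.90),
]

by_mass   = [name for name, _, _ in sorted(elements, key=lambda e: e[2])]
by_number = [name for name, _, _ in sorted(elements, key=lambda e: e[1])]

print(by_mass)    # ['Potassium', 'Argon', 'Iodine', 'Tellurium']
print(by_number)  # ['Argon', 'Potassium', 'Tellurium', 'Iodine']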

 
[10]
 
With apologies to The Bard.
 
[11]
 
I really can’t conceive that anyone who has read this far needs the Big Bang further expounded to them, but if so, then GIYF.
 
[12]
 
We think that the Big Bang also created small quantities of Lithium, as covered in Professor Johnson’s diagram.
 
[13]
 
Generally some type of Core Collapse supernova.
 
[14]
 
Type-Ia supernovae are a phenomenon that allows us to accurately measure cosmic distances, and hence the size of the Universe and how this is changing.
 
[15]
 
Cosmic rays are very high energy particles that originate from outside of the Solar System and consist mostly of very fast moving protons (aka Hydrogen nuclei) and other atomic nuclei similarly stripped of their electrons.

 

 

Metamorphosis

Metamorphosis

No, these are neither my observations on the work of Kafka, nor on that of Escher [1]. Instead, some musings on how to transform a bare-bones and unengaging chart into something that both captures the attention of the reader and better informs them of the message that the data displayed is relaying. Let’s consider an example:

Before:

Before

After:

After

The two images above are both renderings of the same dataset, which tracks the degree of fragmentation of the Israeli parliament – the Knesset – over time [2]. They are clearly rather different and – I would argue – the latter makes it a lot easier to absorb information and thus to draw inferences.

Boris Gorelik

Both are the work of Boris Gorelik, a data scientist at Automattic, the company best known for creating the freemium SaaS blogging platform, WordPress.com, and the open source blogging software, WordPress [3].

Data for breakfast

I have been a contented WordPress.com user since the inception of this blog back in November 2008, so it was with interest that I learnt that Automattic have their own data-focussed blog, Data for Breakfast, unsurprisingly hosted on WordPress.com. It was on Data for Breakfast that I found Boris’s article, Evolution of a Plot: Better Data Visualization, One Step at a Time. In this he takes the reader step by step through what he did to transform his data visualisation from the ugly duckling “before” exhibit to the beautiful swan “after” exhibit.
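
Boris’s own code and step-by-step reasoning are in his article; purely to give a flavour of the kind of incremental changes involved, here is an illustrative matplotlib sketch of my own (the data and labels are placeholders, not his Knesset figures) that starts from the library’s defaults, then strips away chart junk and labels the series directly:

# An illustrative sketch (not Boris Gorelik's code) of step-by-step chart
# clean-up: remove the frame, push gridlines into the background and label
# the line directly rather than via a legend.
import matplotlib.pyplot as plt

# Placeholder data purely for illustration; not the Knesset fragmentation figures.
years  = [2000, 2005, 2010, 2015]
values = [0.82, 0.85, 0.88, 0.86]

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(years, values, color="#3366aa", linewidth=2)

# Step 1: remove the box around the plot area.
for spine in ("top", "right"):
    ax.spines[spine].set_visible(False)

# Step 2: fade the gridlines so the data dominates.
ax.grid(axis="y", color="#dddddd", linewidth=0.8)
ax.set_axisbelow(True)

# Step 3: label the series directly instead of using a legend.
ax.text(years[-1], values[-1], "  Fragmentation index", va="center", color="#3366aa")

ax.set_title("Illustrative fragmentation measure over time")
fig.tight_layout()
plt.show()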

Boris is using Python and various related libraries to do his data visualisation work. Given that I stopped commercially programming sometime around 2009 (admittedly with a few lapses since), I typically use the much more quotidian Excel for most of the charts that appear on peterjamesthomas.com [4]. Sometimes, where warranted, I enhance these using Visio and / or PaintShop Pro.

For example, the three [5] visualisations featured in A Tale of Two [Brexit] Data Visualisations were produced this way. Despite the use of Calibri, which is probably something of a giveaway, I hope that none of these resembles a straight-out-of-the-box Excel graph [6].
 

Brexit Bar
UK Referendum on EU Membership – Percentage voting by age bracket (see notes)

 
Brexit Bar 2
UK Referendum on EU Membership – Numbers voting by age bracket (see notes)

 
Brexit Flag
UK Referendum on EU Membership – Number voting by age bracket (see notes)

 
While, in the above, I have not gone to the lengths that Boris has in transforming his initial and raw chart into something much more readable, I do my best to make my Excel charts look at least semi-professional. My reasoning is that, when the author of a chart has clearly put some effort into what their chart looks like and has at least attempted to consider how it will be read by people, then this is a strong signal that the subject matter merits some closer consideration.

Next time I develop a chart for posting on these pages, I may take Boris’s lead and also publish how I went about creating it.
 


 Notes

 
[1]
 
Though the latter’s work has adorned these pages on several occasions and indeed appears in my seminar decks.
 
[2]
 
Boris has charted a metric derived from how many parties there have been and how many representatives of each. See his article itself for further background.
 
[3]
 
You can learn more about the latter at WordPress.org.
 
[4]
 
Though I have also used GraphPad Prism for producing more scientific charts such as the main one featured in Data Visualisation – A Scientific Treatment.
 
[5]
 
Yes I can count. I have certificates which prove this.
 
[6]
 
Indeed the final one was designed to resemble a fractured British flag. I’ll leave readers to draw their own conclusions here.

 

 

How Age was a Critical Factor in Brexit

Brexit ages infographic
Click to download a larger PDF version in a new window.

In my last article, I looked at a couple of ways to visualise the outcome of the recent UK Referendum on European Union membership. There I was looking at how different visual representations highlight different attributes of data.

I’ve had a lot of positive feedback about my previous Brexit exhibits and I thought that I’d capture the zeitgeist by offering a further visual perspective, perhaps one more youthful than the venerable pie chart; namely an infographic. My attempt to produce one of these appears above and a full-size PDF version is also just a click away.

For caveats on the provenance of the data, please also see the previous article’s notes section.
 

Addendum

I have leveraged age group distributions from the Ashcroft Polling organisation to create this exhibit. Other sites – notably the BBC – have done the same and my figures reconcile to the interpretations in other places. However, based on further analysis, I have some reason to think that either there are issues with the Ashcroft data, or that I have leveraged it in ways that the people who compiled it did not intend. Either way, the Ashcroft numbers lead to the conclusion that close to 100% of 55-64 year olds voted in the UK Referendum, which seems very, very unlikely. I have contacted the Ashcroft Polling organisation about this and will post any reply that I receive.

– Peter James Thomas, 14th July 2016

 

A Tale of Two [Brexit] Data Visualisations

Rip, or is that RIP? With apologies to The Economist and acknowledgement to David Bacon.

I’m continuing with the politics and data visualisation theme established in my last post. However, I’ll state up front that this is not a political article. I have assiduously stayed silent [on this blog at least] on the topic of my country’s future direction, both in the lead up to the 23rd June poll and in its aftermath. Instead, I’m going to restrict myself to making a point about data visualisation; both how it can inform and how it can mislead.

Brexit Bar
UK Referendum on EU Membership – Percentage voting by age bracket (see notes)

The exhibit above is my version of one that has appeared in various publications post referendum, both on-line and print. As is referenced, its two primary sources are the UK Electoral Commission and Lord Ashcroft’s polling organisation. The reason why there are two sources rather than one is explained in the notes section below.

With the caveats explained below, the above chart shows the generational divide apparent in the UK Referendum results. Those under 35 years old voted heavily for the UK to remain in the EU; those with ages between 35 and 44 voted to stay in pretty much exactly the proportion that the country as a whole voted to leave; and those over 45 years old voted increasingly heavily to leave as their years advanced.

One thing which is helpful about this exhibit is that it shows in what proportion each cohort voted. This means that the type of inferences I made in the previous paragraph leap off the page. It is pretty clear (visually) that there is a massive difference between how those aged 18-24 and those aged 65+ thought about the question in front of them in the polling booth. However, while the percentage based approach illuminates some things, it masks others. A cursory examination of the chart above might lead one to ask – based on the area covered by red rectangles – how it was that the Leave camp prevailed? To pursue an answer to this question, let’s consider the data with a slightly tweaked version of the same visualisation as below:

Brexit Bar 2
UK Referendum on EU Membership – Numbers voting by age bracket (see notes)

[Aside: The eagle-eyed amongst you may notice a discrepancy between the figures shown on the total bars above and the actual votes cast, which were respectively: Remain: 16,141k and Leave: 17,411k. Again see the notes section for an explanation of this.]

A shift from percentages to actual votes recorded casts some light on the overall picture. It now becomes clear that, while a large majority of 18-24 year olds voted to Remain, not many people in this category actually voted. Indeed while, according to the 2011 UK Census, the 18-24 age category makes up just under 12% of all people over 18 years old (not all of whom would necessarily be either eligible or registered to vote), the Ashcroft figures suggest that well under half of this group cast their ballot, compared to much higher turnouts for older voters (once more see the notes section for caveats).

This observation rather blunts the assertion that the old voted in ways that potentially disadvantaged the young; the young had every opportunity to make their voice heard more clearly, but didn’t take it. Reasons for this youthful disengagement from the political process are of course beyond the scope of this article.

However it is still hard (at least for the author’s eyes) to get the full picture from the second chart. In order to get a more visceral feeling for the dynamics of the vote, I have turned to the much maligned pie chart. I also chose to use the even less loved “exploded” version of this.

Brexit Flag
UK Referendum on EU Membership – Number voting by age bracket (see notes)

Here the weight of both the 65+ and 55+ Leave vote stands out as does the paucity of the overall 18-24 contribution; the only two pie slices too small to accommodate an internal data label. This exhibit immediately shows where the referendum was won and lost in a way that is not as easy to glean from a bar chart.

While I selected an exploded pie chart primarily for reasons of clarity, perhaps the fact that the resulting final exhibit brings to mind a shattered and reassembled Union Flag was also an artistic choice. Unfortunately, it seems that this resemblance has a high likelihood of proving all too prophetic in the coming months and years.
 

Addendum

I have leveraged age group distributions from the Ashcroft Polling organisation to create these exhibits. Other sites – notably the BBC – have done the same and my figures reconcile to the interpretations in other places. However, based on further analysis, I have some reason to think that either there are issues with the Ashcroft data, or that I have leveraged it in ways that the people who compiled it did not intend. Either way, the Ashcroft numbers lead to the conclusion that close to 100% of 55-64 year olds voted in the UK Referendum, which seems very, very unlikely. I have contacted the Ashcroft Polling organisation about this and will post any reply that I receive.

– Peter James Thomas, 14th July 2016

 



 
 
Notes

Caveat: I am neither a professional political pollster, nor a statistician. Instead I’m a Pure Mathematician, with a basic understanding of some elements of both these areas. For this reason, the following commentary may not be 100% rigorous; however my hope is that it is nevertheless informative.

In the wake of the UK Referendum on EU membership, a lot of attempts were made to explain the result. Several of these used splits of the vote by demographic attributes to buttress the arguments that they were making. All of the exhibits in this article use age bands, one type of demographic indicator. Analyses posted elsewhere looked at things like the influence of the UK’s social grade classifications (A, B, C1 etc.) on voting patterns, the number of immigrants in a given part of the country, the relative prosperity of different areas and how this has changed over time. Other typical demographic dimensions might include gender, educational achievement or ethnicity.

However, no demographic information was captured as part of the UK referendum process. There is no central system which takes a unique voting ID and allocates attributes to it, allowing demographic dicing and slicing (to be sure a partial and optional version of this is carried out when people leave polling stations after a General Election, but this was not done during the recent referendum).

So, how do so many demographic analyses suddenly appear? To offer some sort of answer here, I’ll take you through how I built the data set behind the exhibits in this article. At the beginning I mentioned that I relied on two data sources, the actual election results published by the UK Electoral Commission and the results of polling carried out by Lord Ashcroft’s organisation. The latter covered interviews with 12,369 people selected to match what was anticipated to be the demographic characteristics of the actual people voting. As with most statistical work, properly selecting a sample with no inherent biases (e.g. one with the same proportion of people who are 65 years or older as in the wider electorate) is generally the key to accuracy of outcome.

Importantly, demographic information is known about the sample (which may also be reweighted based on interview feedback), and my charts are created by assuming that what holds true for the sample also holds true for the electorate. So if X% of 18-24 year olds in the sample voted Remain, the assumption is that X% of the total number of 18-24 year olds who voted will have done the same.

12,000 plus is a good sample size for this type of exercise and I have no reason to believe that Lord Ashcroft’s people were anything other than professional in selecting the sample members and adjusting their models accordingly. However this is not the same as having definitive information about everyone who voted. So every exhibit you see relating to the age of referendum voters, or their gender, or social classification is based on estimates. This is a fact that seldom seems to be emphasised by news organisations.

The size of Lord Ashcroft’s sample also explains why the total figures for Leave and Remain on my second exhibit are different to the voting numbers. This is because 5,949 / 12,369 = 48.096% (looking at the sample figures for Remain) whereas 16,141,241 / 33,551,983 = 48.108% (looking at the actual voting figures for Remain). Both figures round to 48.1%, but the small difference in the decimal expansions, when applied to 33 million people, yields a slightly different result.
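
To make that arithmetic concrete, here is a short Python sketch of my own (not part of the polling methodology) which scales the Ashcroft sample’s Remain share up to the Electoral Commission’s total of valid votes, using the figures quoted above; the small gap between the estimate and the actual Remain count is the rounding effect just described:

# Illustrative only: scale a sample proportion up to the full electorate.
sample_remain, sample_size = 5_949, 12_369            # Ashcroft post-vote poll
actual_remain, total_votes = 16_141_241, 33_551_983   # Electoral Commission figures

sample_share = sample_remain / sample_size            # ~48.096%
actual_share = actual_remain / total_votes            # ~48.108%

estimated_remain = sample_share * total_votes
print(f"Sample Remain share:     {sample_share:.3%}")
print(f"Actual Remain share:     {actual_share:.3%}")
print(f"Estimated Remain votes:  {estimated_remain:,.0f}")
print(f"Difference from actual:  {actual_remain - estimated_remain:,.0f}")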

 

Showing uncertainty in a Data Visualisation

Click to view a larger version

My attention was drawn to the above exhibit by a colleague. It is from the FiveThirtyEight web-site and is one of several exhibits included in an analysis of the standing of the two US Presidential hopefuls.

In my earlier piece, Data Visualisation – A Scientific Treatment, I argued for more transparency in showing the inherent variability associated with the numbers spat out by statistical models. My specific call back then was for the use of error bars.

The FiveThirtyEight exhibit deals with this same challenge in a manner which I find elegant, clean and eminently digestible. It contains many different elements of information, but remains an exhibit whose meaning is easy to absorb. It’s an approach I will probably look to leverage myself next time I have a similar need.

 

Forming an Information Strategy: Part II – Situational Analysis

Forming an Information Strategy
I – General Strategy II – Situational Analysis III – Completing the Strategy

Maybe we could do with some better information, but how to go about getting it? Hmm...

This article is the second of three which address how to formulate an Information Strategy. I have written a number of other articles which touch on this subject[1] and have also spoken about the topic[2]. However I realised that I had never posted an in-depth review of this important area. This series of articles seeks to remedy this omission.

The first article, Part I – General Strategy, explored the nature of strategy, laid some foundations and presented a framework of questions which will need to be answered in order to formulate any general strategy. This chapter, Part II – Situational Analysis, explains how to adapt the first element of this general framework – The Situational Analysis – to creating an Information Strategy. In Part I, I likened formulating an Information Strategy to a journey; Part III – Completing the Strategy sees us reaching the destination by working through the rest of the general framework and showing how this can be used to produce a fully-formed Information Strategy.

As with all of my other articles, this essay is not intended as a recipe for success, a set of instructions which – if slavishly followed – will guarantee the desired outcome. Instead the reader is invited to view the following as a set of observations based on what I have learnt during a career in which the development of both Information Strategies and technology strategies in general have played a major role.
 
 
A Recap of the Strategic Framework

LCP

I closed Part I of this series by presenting a set of questions, the answers to which will facilitate the formation of any strategy. These have a geographic / journey theme and are as follows:

  1. Where are we?
  2. Where do we want to be instead and why?
  3. How do we get there, how long will it take and what will it cost?
  4. Will the trip be worth it?
  5. What else can we do along the way?

In this article I will focus on how to answer the first question, Where are we? This is the province of a Situational Analysis. I will now move on from general strategy and begin to be specific about how to develop a Situational Analysis in the context of an overall Information Strategy.

But first a caveat: if the last article was prose-heavy, this one is question-heavy; the reader is warned!
 
 
Where are we? The anatomy of an Information Strategy’s Situational Analysis

The unfashionable end of the western spiral arm of the Galaxy

If we take this question and, instead of aiming to plot our celestial coordinates, look to consider what it would mean in the context of an Information Strategy, then a number of further questions arise. Here are just a few examples of the types of questions that the strategist should investigate, broken down into five areas:

Business-focussed questions

  • What do business people use current information to do?
  • In their opinion, is current information adequate for this task and if not in what ways is it inadequate?
  • Are there any things that business people would like to do with information, but where the figures don’t exist or are not easily accessible?
  • How reliable and trusted is existing information, is it complete, accurate and suitably up-to-date?
  • If there are gaps in information provision, what are these and what is the impact of missing data?
  • How consistent is information provision, are business entities and calculated figures ambiguously labeled and can you get different answers to the same question in different places?
  • Is existing information available at the level that different people need, e.g. by department, country, customer, team or at a transactional level?
  • Are there areas where business people believe that data is available, but no facilities exist to access this?
  • What is the extent of End User Computing, is this at an appropriate level and, if not, is poor information provision a driver for work in this area?
  • Related to this, are the needs of analytical staff catered for, or are information facilities targeted mostly at management reporting only?
  • How easy do business people view it as being to get changes made to information facilities, or to get access to the sort of ad hoc data sets necessary to support many business processes?
  • What training have business people received, what is the general level of awareness of existing information facilities and how easy is it for people to find what they need?
  • How intuitive are existing information facilities and how well-structured are menus which provide access to these?
  • Is current information provision something that is an indispensable part of getting work done, or at best an afterthought?

Design questions

  • How were existing information facilities created, who designed and built them and what level of business input was involved?
  • What are the key technical design components of the overall information architecture and how do they relate to each other?
  • If there is more than one existing information architecture (e.g. in different geographic locations or different business units), what are the differences between them?
  • How many different tools are used in various layers of the information architecture? E.g.
    • Databases
    • Extract Transform Load tools
    • Multidimensional data stores
    • Reporting and Analysis tools
    • Data Visualisation tools
    • Dashboard tools
    • Tools to provide information to applications or web-portals
  • What has been the role of data modeling in designing and developing information facilities?
  • If there is a target data model for the information facilities, is this fit for purpose and does it match business needs?
  • Has a business glossary been developed in parallel to the design of the information capabilities and if so is this linked to reporting layers?
  • What is the approach to master data and how is this working?

Technical questions

  • What are the key source systems and what are their types, are these integrated with each other in any way?
  • How does data flow between source systems?
  • Is there redundancy of data and can similar datasets in different systems get out of synch with each other, if so which are the master records?
  • How robust are information facilities, do they suffer outages, if so how often and what are the causes?
  • Are any issues experienced in making changes to information facilities, either extended development time, or post-implementation failures?
  • Are there similar issues related to the time taken to fix information facilities when they go wrong?
  • Are various development tools integrated with each other in a way that helps developers and makes code more rigorous?
  • How are errors in input data handled and how robust are information facilities in the face of these challenges?
  • How well-optimised is the regular conversion of data into information?
  • How well do information facilities cope with changes to business entities (e.g. the merger of two customers)?
  • Is the IT infrastructure underpinning information facilities suitable for current data volumes, and what about future data volumes?
  • Is there a need for redundancy in the IT infrastructure supporting information facilities, if so, how is this delivered?
  • Are suitable arrangements in place for disaster recovery?

Process questions

  • Is there an overall development methodology applied to the creation of information facilities?[3]
  • If so, is it adhered to and is it fit for purpose?
  • What controls are applied to the development of new code and data structures?
  • How are requests for new facilities estimated and prioritised?
  • How do business requirements get translated into what developers actually do and is this process working?
  • Is the level, content and completeness of documentation suitable, is it up-to-date and readily accessible to all team members?
  • What is the approach to testing new information facilities?
  • Are there any formal arrangements for Data Governance and any initiatives to drive improvements in data quality?
  • How are day-to-day support and operational matters dealt with and by whom?

Information Team questions

  • Is there a single Information Team or many, if many, how do they collaborate and share best practice?
  • What is the demand for work required of the existing team(s) and how does this relate to their capacity for delivery?
  • What are the skills of current team members and how do these complement each other?
  • Are there any obvious skill gaps or important missing roles?
  • How do information people relate to other parts of IT and to their business colleagues?
  • How is the information team(s) viewed by their stakeholders in terms of capability, knowledge and attitude?

 
An Approach to Geolocation

It's good to talk. I was going to go with a picture of the late Bob Hoskins, but figured that this might not resonate outside of my native UK.

So that’s a long list of questions[4]. To add one more to the list: what is the best way of answering them? Of course it may be that there is existing documentation which can help in some areas; however, the majority of questions are going to be answered via the expedient of talking to people. While this may appear to be a simple approach, if these discussions are going to result in an accurate and relevant Situational Analysis, then how to proceed needs to be thought about up-front and work needs to be properly structured.

Business conversations

A challenge here is the range and number of people[5]. It is of course crucial to start with the people who consume information. These discussions would ideally allow the strategist to get a feeling for what different business people do and how they do it. This would cover their products/services, the markets that they operate in and the competitive landscape they face. With some idea of these matters established, the next item is their needs for information and how well these are met at present. Together feedback in these areas will begin to help to shape answers to some of the business-focussed questions referenced above (and to provide pointers to guide investigations in other areas). However it is not as simple an equation as:

Talk to Business People = Answer all Business-focussed Questions

The feedback from different people will not be identical, variations may be driven by their personal experience, how long they have been at the company and what part of its operations they work in. Different people will also approach their work in different ways, some will want to be very numerically focussed in decision-making, others will rely more on experience and relationships. Also even getting information out of people in the first place is a skill in itself; it is a capital mistake for even the best analyst to theorise before they have data[6].

This heterogeneity means that one challenge in writing the business-focussed component of a Situational Analysis within an overall Information Strategy is sifting through the different feedback looking for items which people agree upon, or patterns in what people said and the frequency with which different people made similar points. This work is non-trivial and there is no real substitute for experience. However, one thing that I would suggest can help is to formally document discussions with business people. This has a number of advantages, such as being able to run this past them to check the accuracy and completeness of your notes[7] and being able to defend any findings as based on actual fact. However, documenting meetings also facilitates the analysis and synthesis process described above. These meeting notes can be read and re-read (or shared between a number of people collectively engaged in the strategy formulation process) and – when draft findings have been developed – these can be compared to the original source material to ensure consistency and completeness.

IT conversations

I preferred Father Ted (or the first series of Black Books) myself; can't think where the inspiration for these characters came from.

Depending on circumstances, talking to business people can often be the largest activity and will do most to formulate proposals that will appear in other parts of the Information Strategy. However the other types of questions also need to be considered and parallel discussions with general IT people are a prerequisite. An objective here is for the strategist to understand (and perhaps document) the overall IT landscape and how this flows into current information capabilities. Such a review can also help to identify mismatches between business aspirations and system capabilities; there may be a desire to report on data which is captured nowhere in the organisation for example.

The final tranche of discussions need to be with the information professionals who have built the current information landscape (assuming that they are still at the company, if not then the people to target are those who maintain information facilities). There can sometimes be an element of defensiveness to be overcome in such discussions, but equally no one will have a better idea about the challenges with existing information provision than the people who deal with this area day in and day out. It is worth taking the time to understand their thoughts and opinions. With both of these groups of IT people, formally documented notes and/or schematics are just as valuable as with the business people and for the same reasons.

Rinse and Repeat

The above conversations have been described sequentially, but some element of them will probably be in parallel. Equally the process is likely to be somewhat iterative. It is perhaps a good idea to meet with a subset of business people first, draw some very preliminary conclusions from these discussions and then hold some initial meetings with various IT people to both gather more information and potentially kick the tyres on your embryonic findings. Sometimes after having done a lot of business interviews, it is also worth circling back to the first cohort both to ask some different questions based on later feedback and also to validate the findings which you are hopefully beginning to refine by now.

Of course a danger here is that you could spend an essentially limitless time engaging with people and never land your Situational Analysis; in particular person A may suggest what a good idea it would be for you to also meet with person B and person C (and so on exponentially). The best way to guard against this is time-boxing. Give yourself a deadline, perhaps arrange for a presentation of an initial Situational Analysis to an audience at a point in the not-so-distant future. This will help to focus your efforts. Of course mentioning a presentation, or at least some sort of abridged Situational Analysis, brings up the idea of how to summarise the detailed information that you have uncovered through the process described above. This is the subject of the final section of this article.
 
 
In Summary

Sigma

I will talk further about how to summarise findings and recommendations in Part III, for now I wanted to focus on just two aspects of this. First a mechanism to begin to identify areas of concern and second a simple visual way to present the key elements of an information-focussed Situational Analysis in a relatively simple exhibit.

Sorting the wheat from the chaff

To an extent, sifting through large amounts of feedback from a number of people is one way in which good IT professionals earn their money. Again experience is the most valuable tool to apply in this situation. However, I would suggest some intermediate steps would also be useful here both to the novice and the seasoned professional. If you have extensive primary material from your discussions with a variety of people and have begun to discern some common themes through this process, then – rather than trying to progress immediately to an overall summary – I would recommend writing notes around each of these common themes as a good place to start. These notes may be only for your own purposes, or they may be something that you also later choose to circulate as additional information; if you take the latter approach, then bear the eventual audience in mind while writing. Probably while you are composing these intermediate-level notes a number of things will happen. First it may occur to you that some sections could be split to more precisely target the issues. Equally other sections may overlap somewhat and could benefit from being merged. Also you may come to realise that you have overlooked some areas and need to address these.

Whatever else is happening, this approach is likely to give your subconscious some time to chew over the material in parallel. It is for this reason that sometimes the strategist will wake at night with an insight that had previously eluded them. Whether or not the subconscious contributes this dramatically, this rather messy and organic process will leave you with a number of paragraphs (or maybe pages) on a handful of themes. This can then form the basis of the more summary exhibit which I describe in the next section; namely a scorecard.

An Information Provision Scorecard

Information provision scorecard (click to view a larger version in a new tab)

Of course a scorecard about the state of information provision approaches levels of self-reference that Douglas R Hofstadter[8] would be proud of. I would suggest that such a scorecard could be devised by thinking about each of the common themes that have arisen, considering each of the areas of questioning described above (business, design, technical, process and team), or perhaps a combination of both. The example scorecard which I provide above uses the areas of questions as its intermediate level. These are each split out into a number of sub-categories (these will vary from situation to situation and hence I have not attempted to provide actual sub-category names). A score can be allocated (based on your research) to each of these on some scale (the example uses a 5 point one) and these base figures can be rolled up to get a score for each of the intermediate categories. These can then be further summarised to give a single, overall score [9].
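
As a minimal sketch of the roll-up just described (my own illustration, with placeholder area names and scores rather than anything from a real assessment), sub-category scores on a 1-to-5 scale are simply averaged into category scores and then into a single overall figure; deliberately unweighted, for the reasons given in note [9]:

# Illustrative scorecard roll-up: simple averages only, no weighting.
from statistics import mean

scores = {                      # assessment area -> sub-category scores (1-5)
    "Business":  [3, 2, 4, 3],
    "Design":    [2, 3, 3],
    "Technical": [4, 3, 2, 3],
    "Process":   [2, 2, 3],
    "Team":      [4, 4, 3],
}

category_scores = {area: mean(vals) for area, vals in scores.items()}
overall_score = mean(category_scores.values())

for area, score in category_scores.items():
    print(f"{area:<10} {score:.1f}")
print(f"{'Overall':<10} {overall_score:.1f}")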

While a data visualisation such as the one presented here may be a good way to present overall findings, it is important that this can be tied back to the notes that have been compiled during the analysis. Sometimes such scores will be challenged and it is important that they are based in fact and can thus be defended.
 
 
Next steps

Next steps

Of course your scorecard, or overall Situational Analysis, could tell you that all is well. If this is the case, then our work here may be done[10]. If however the Situational Analysis reveals areas where improvements can be made, or if there is a desire to move the organisation forward in a way that requires changes to information provision, then thought must be given to either what can be done to remediate problems or what is necessary to seize opportunities; most often a mixture of both. Considering these questions will be the subject of the final article in this series, Forming an Information Strategy: Part III – Completing the Strategy.
 


 
Addendum

When I published the first part of this series, I received an interesting comment from Gary Nuttall, Head of Business Intelligence at Chaucer Syndicates (you can view Gary’s profile on LinkedIn and he posts as @gpn01 on Twitter). I reproduce an extract from this verbatim below:

[When considering questions such as “Where are we?”] one thing I’d add, which for smaller organisations may not be relevant, is to consider who the “we” is (are?). For a multinational it can be worth scoping out whether the strategy is for the legal entity or group of companies, does it include the ultimate parent, etc. It can also help in determining the culture of the enterprise too which will help to shape the size, depth and span of the strategy too – for some companies a two pager is more than enough for others a 200 pager would be considered more appropriate.

I think that this is a valuable additional perspective and I thank Gary for providing this insightful and helpful feedback.
 

Forming an Information Strategy
I – General Strategy II – Situational Analysis III – Completing the Strategy

 
Notes

 
[1]
 
These include (in chronological order):

 
[2]
 
IRM European Data Warehouse and Business Intelligence Conference
– November 2012
 
[3]
 
There are a whole raft of sub-questions here and I don’t propose to be exhaustive in this article.
 
[4]
 
In practice it’s at best a representative subset of the questions that would need to be answered to assemble a robust Situational Analysis.
 
[5]
 
To get some perspective on the potential range of business people whom it is necessary to engage in such a process, again see the aforementioned Developing an international BI strategy.
 
[6]
 
With apologies to Arthur Conan Doyle and his most famous creation.
 
[7]
 
It is not atypical for this approach to lead to people coming up with new observations based on reviewing your meeting notes. This is a happy outcome.
 
[8]
 
B.T.L. - An eternal golden braid (with apologies to Douglas R Hofstadter)

Gödel, Escher, Bach: An Eternal Golden Braid has been referenced a number of times on this site (see above from New Adventures in Wi-Fi – Track 3: LinkedIn), but I think that this is the first time that I have explicitly acknowledged its influence.

 
[9]
 
You can try to be cute here and weight scores before rolling them up. In practice this is seldom helpful and can give the impression that the precision of scoring is higher than can ever actually be the case. Judgement also needs to be exercised in determining which graphic to use to best represent a rolled up score as these will seldom precisely equal the fractions selected; quarters in this example. The strategist should think about whether a rounded-up or rounded-down summary score is more representative of reality as pure arithmetic may not suffice in all cases.
 
[10]
 
There remains the possibility that the current situation is well-aligned with current business practices, but will have problems supporting future ones. In this case perhaps a situational analysis is less useful, unless this is comparing to some desired future state (of which more in the next chapter).

 

 

Scienceogram.org’s Infographic to celebrate the Philae landing

Scienceogram.org Rosetta Infographic - Click to view the original in a new tab

© scienceogram.org 2014
Reproduced on this site under a Creative Commons licence
Original post may be viewed here

As a picture is said to paint a thousand words, I’ll (mostly) leave it to Scienceogram’s infographic to deliver the message.

However, The Center for Responsive Politics (I have no idea whether or not they have a political affiliation; they claim to be nonpartisan) estimates the cost of the recent US Congressional elections at around $3.67 bn (€2.93 bn). I found a lower (but still rather astonishing) figure of $1.34 bn (€1.07 bn) at the Federal Election Commission web-site, but suspect that this number excludes Political Action Committees and their like.

To make a European comparison to a European space project, the Common Agricultural Policy cost €57.5 bn ($72.0 bn) in 2013 according to the BBC. Given that Rosetta’s costs were spread over nearly 20 years, it makes sense to move the decimal point rightwards one place in both the euro and dollar figures and then to double the resulting numbers before making comparisons (this is left as an exercise for the reader).
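
For those who prefer not to do the exercise, here is a back-of-the-envelope Python sketch of my own; the roughly €1.4 bn Rosetta figure is the one quoted later in this piece, and the CAP figure is the BBC one above:

# Rough, illustrative arithmetic only.
cap_annual_eur_bn = 57.5       # CAP cost in 2013, per the BBC figure quoted above
rosetta_total_eur_bn = 1.4     # approximate total cost of Rosetta, spread over ~20 years

cap_over_20_years = cap_annual_eur_bn * 10 * 2   # move the decimal point, then double
print(f"CAP over ~20 years: ~EUR {cap_over_20_years:,.0f} bn")
print(f"Rosetta total:      ~EUR {rosetta_total_eur_bn} bn")
print(f"Ratio: roughly {cap_over_20_years / rosetta_total_eur_bn:,.0f} to 1")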

Of course I am well aware that a quick Google could easily produce figures (such as how many meals, or vaccinations, or so on you could get for €1.4 bn) making points that are entirely antipodal to the ones presented. At the end of the day we landed on a comet and will – fingers crossed – begin to understand more about the formation of the Solar System and potentially Life on Earth itself as a result. Whether or not you think that is good value for money probably depends mostly on what sort of person you are. As I relate in a previous article, infographics only get you so far.
 


 
Scienceogram provides précis [correct plural] of UK science spending, giving overviews of how investment in science compares to the size of the problems it’s seeking to solve.