Ideas for avoiding Big Data failures and for dealing with them if they happen

Avoid failure

In August 2016, I read an article by Paul Barsch (@paul_a_barsch), who at the time was Teradata's Marketing Director for Big Data Consulting Services [1]. I have always had a lot of time for Paul's thoughts; and of course anyone who features the Mandelbrot Set so prominently in his work deserves a certain amount of kudos.

Paul Barsch

The title of the article in question was Big Data Projects – When You’re Not Getting the ROI You Expect and the piece appeared on Paul’s personal blog, Just Like Davos. Something drew me back to this article recently, maybe some of the other writing I have done around Big Data [2], but most likely my recent review of areas in which Data Programmes can go wrong [3]. Whatever the reason, I also ended up taking a look at his earlier piece, 3 Big Data Potholes to Avoid (December 2015). This article leverages material from each of these two posts on Paul’s blog. As ever, I’d encourage readers to take a look at the source material.

I’ll kick off with some scare tactics borrowed from the earlier article (which – for good reasons – are also cited in the later one):

[According to Gartner] “Through 2017, 60% of big data projects will fail to go beyond piloting and experimentation and will be abandoned.”

As most people will be aware, rigorous studies have shown that 82% of statistics are made up on the spur of the moment [4], but 60% is still a scary number. Until, that is, you begin to think about the success rate of most things that people try. Indeed, I used to include the following stats in a deck that I used internally in the early years of this decade:

“Data warehouses play a crucial role in the success of an information program. However more than 50% of data warehouse projects will have limited acceptance, or will be outright failures”

– Gartner 2007

“60-70% of the time Enterprise Resource Planning projects fail to deliver benefits, or are cancelled”

– CIO.com 2010

“61% of acquisition programs fail”

– McKinsey 2009

So a 60% failure rate seems pretty much par for the course. The sad truth is that humans aren’t very good at doing some things and complex projects with many moving parts and lots of stakeholders, each with different priorities and agendas, are probably exhibit number one of this. Of course, looking at my list above, if any of the types of work described is successful, then benefits will accrue. Many things in life that would be beneficial are hard to achieve and come with no guarantee of success. I’m pretty sure that the same observation applies to Big Data.

If an organisation, or a team within it, is already good at getting stuff done (and, importantly, also has some experience in the field of data – something we will come back to soon), then I think that they will have a failure rate with Big Data implementations significantly less than 60%. If the opposite holds, then the failure rate will probably exceed 60%. Given that there is a continuum of organisational capabilities, a 60% failure rate is probably a reasonable average. The key is to make sure that your Big Data project falls in the successful 40%. Here another observation from Paul’s December 2015 article is helpful.

If you build your big data system, chances are that business users won’t come. Why? Let’s be honest—people hate change. […] Big data adoption isn’t a given. It’s possible to spend 6-12 months building out a big data system in the cloud or on premise, giving users their logins and pass-codes, and then seeing close to zero usage.

I like the beginning of this quote. Indeed, for many years my public speaking deck included the following image [5]:

Field of Dreams

I used to go on to say some variant of the following:

Generally if you only build it, they (being users) are highly unlikely to come. You need to go and get them. Why is this? Well, first of all, people may have no choice other than to use a transaction processing system; they do, however, choose whether or not to use analytical capabilities and will only do so if there is something in it for them – generally that they can do their job faster, better, or ideally both.

Second, answering business questions is only part of the story. The other element is that these answers must lead to people taking action. Getting people to take action means that you are in the rather messy world of influencing people's behaviour; maybe something not many IT types are experts in. Nevertheless, one objective of a successful data programme must be to make the facilities it delivers as indispensable a part of doing business as, say, e-mail. The metaphor of mildly modifying an organisation's DNA is an apt one.

Paul goes on to stress the importance of Executive sponsorship, which is obviously a prerequisite. However, if Executive support forms the stick, then the Big Data team will need to take responsibility for growing some tasty carrots as well. It is one of my pet peeves when teams doing anything with a technological element seem to think that it is up to other people (including Executive Sponsors) to do the “wet work” of influencing people to embrace the technology. Such cultural transformation should be a core competency of any team engaged in something as potentially transformational as a Big Data implementation [6]. When this isn’t the case, then I think that the likelihood of a Big Data project veering towards the unsuccessful 60% becomes greater.

Einstein on Experience

Returning to Paul’s more recent article, two of the common mistakes he lists are [7]:

  • Experience – With millions of dollars potentially invested in a big data project, “learning on the job” won’t cut it.
     
  • Team – Too many big data initiatives end up solely sponsored by IT and fail to gain business buy-in.

It was at this point that echoes from my recent piece on the risks impacting data programmes became a cacophonous clamour. My risk number 4 was:

Risk 4: Staff lack skills and prior experience of data programmes.
Potential Impact: Time spent educating people rather than getting on with work. Sub-optimal functionality, slippages, later performance problems, higher ongoing support costs.

And my risk number 16 was:

Risk 16: In the absence of [up-front focus on understanding key business decisions], the programme becoming a technology-driven one.
Potential Impact: The business gets what IT or Change think that they need, not what is actually needed. There is more focus on shiny toys than on actionable information. The programme forgets the needs of its customers.

It’s always gratifying when two professionals working in the same field [8] reach similar conclusions.

It is one thing to list problems, quite another to offer solutions. However Paul does the latter in his August 2016 article, including the following advice:

Every IT project carries risk. Open source projects, considering how fast the market changes (the rise of Apache Spark and the cooling off of MapReduce comes to mind), should invite even more scrutiny. Clearly, significant cost rises in terms of big data salaries, vendor contracts, procurement of hard to find skills and more could throw off your business value calculations. Consider a staged approach to big data as a potential panacea to reassess risk along the way and help prevent major financial disasters.

Thomas Edison

Having highlighted both the risk of failure and some of the reasons that failure can occur, Paul ends his later article on a more up-beat note:

One thing’s for sure, if you decide to pull the plug on a specific big data initiative, because it’s not delivering ROI it’s important to take your licks and learn from the experience. By doing so, you will be that much smarter and better prepared the second time around. And because big data has the opportunity to provide so much value to your firm, there certainly will be another chance to get it right.

The mantra of “fail fast” has wormed its way into the business lexicon. My critique of an unthinking reliance on this phrase is that failing fast is only useful if you succeed every now and again. I think being aware of the issues that Paul cites and listening to his guidance should go some way to ensuring that one of your attempts at Big Data implementation will end up in the successful category. Based on the Gartner statistic, if you do 5 Big Data projects, the chance of all of them being unsuccessful is only 8% [9]. To turn this round, there is a 92% chance that at least one of the 5 will end in success. While this sounds like a healthier figure, the key, as Paul rightly points out, is to make sure you cut your losses early when things go badly and retain some budget and credibility to try again.
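For those who like to check the working, here is a minimal Python sketch of the back-of-the-envelope calculation above; it assumes, as the text implicitly does, that project outcomes are independent and that the Gartner 60% failure rate applies to each attempt.

```python
# Back-of-the-envelope odds for a portfolio of Big Data projects, assuming each
# one fails independently with the Gartner-quoted probability of 60%.
failure_rate = 0.6
attempts = 5

p_all_fail = failure_rate ** attempts        # every attempt fails
p_at_least_one_success = 1 - p_all_fail      # at least one succeeds

print(f"P(all {attempts} fail): {p_all_fail:.2%}")                    # ~7.78%
print(f"P(at least one succeeds): {p_at_least_one_success:.2%}")      # ~92.22%
```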

Samuel Beckett

Back in March 2009, when I wrote Perseverance, I included a quote that a colleague of mine loved to make in a business context:

Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. [10]

I think that the central point that Paul is making is that there are steps you can take to guard against failure, but that if – despite these efforts – things start to go awry with your Big Data project, “it takes leadership to make the right decision”; i.e. to quit and start again. Much as this runs against the grain of human nature, it seems like sound advice.
 


 
Notes

 
[1]
 
He has since moved on to EY.
 
[2]
 
Including:

  1. The Big Data Universe
  2. Do any technologies grow up or do they only come of age?

And some pieces scheduled to be published during the rest of February and March.

 
[3]
 
20 Risks that Beset Data Programmes.
 
[4]
 
Seemingly you can find most percentages quoted somewhere, but the following is pretty definitive:

https://www.google.co.uk/search?q=82+of+statistics+are+made+up

 
[5]
 
I would be remiss if I didn’t point out that the actual quote from Field of Dreams is “If you build it HE will come”. Who “he” refers to here is pretty much the whole point of the film.

 
[6]
 
Once more I would direct readers to my, now rather venerable, trilogy of articles devoted to this area (as well as much of the other content of this site):

  1. Marketing Change
  2. Education and cultural transformation
  3. Sustaining Cultural Change
 
[7]
 
I have taken the liberty of swapping the order of Paul’s two points to match that of my list of risks.
 
[8]
 
Clearly a corn [maize] field in the context of this article.
 
[9]
 
7.78% is a more accurate figure (and equal to 0.6⁵, i.e. 60% raised to the fifth power, of course).
 
[10]
 
Samuel Beckett, Worstward Ho (1983).

 

 

20 Risks that Beset Data Programmes

Data Programme Risks

This article draws extensively on elements of the framework I use to both highlight and manage risks on data programmes. It has its genesis in work that I did early in 2012 (but draws on experience from the years before this). I have tried to refresh the content since then to reflect new thinking and new developments in the data arena.
 
 
Introduction

What are my motivations in publishing this article? Well, I have both designed and implemented data and information programmes for over 17 years. In the majority of cases my programme work has been a case of executing a data strategy that I had developed myself [1]. While I have generally been able to steer these programmes to a successful outcome [2], there have been both bumps in the road and the occasional blind alley, requiring a U-turn and another direction to be selected. I have also been able to observe data programmes that ran in parallel to mine in different parts of various organisations. Finally, I have often been asked to come in and address issues with an existing data programme; something that appears to happen all too often. In short, I have seen a lot of what works and what does not work. Having also run other types of programmes [3], I can attest to data programmes being different. Failure to recognise this difference and thus approaching a data programme just like any other piece of work is one major cause of issues [4].

Before I get into my list proper, I wanted to pause to highlight a further couple of mistakes that I have seen made more than once; ones that are more generic in nature and thus don’t appear on my list of 20 risks. The first is to assume that the way that an organisation’s data is controlled and leveraged can be improved in a sustainable way by just kicking off a programme. What is more important in my experience is to establish a data function, which will then help with both the governance and exploitation of data. This data function, ideally sitting under a CDO, will of course want to initiate a range of projects, from improving data quality, to sprucing up reporting, to establishing better analytical capabilities. Best practice is to gather these activities into a programme, but things work best if the data function is established first, owns such a programme and actively partakes in its execution.

Data is for life...

As well as the issue of ongoing versus transitory accountability for data and the undoubted damage that poorly coordinated change programmes can inflict on data assets, another driver for first establishing a data function is that data needs will always be there. On the governance side, new systems will be built, bought and integrated, bringing new data challenges. On the analytical side, there will always be new questions to be answered, or old ones to be reevaluated. While data-centric efforts will generate many projects with start and end dates, the broad stream of data work continues on in a way that, for example, the implementation of a new B2C capability does not.

The second is to believe that you will add lasting value by outsourcing anything but targeted elements of your data programme. This is not to say that there is no place for such arrangements, which I have used myself many times, just that one of the lasting benefits of gimlet-like focus on data is the IP that is built up in the data team; IP that in my experience can be leveraged in many different and beneficial ways, becoming a major asset to the organisation [5].

Having made these introductory comments, let’s get on to the main list, which is divided into broadly chronological sections, relating to stages of the programme. The 10 risks which I believe are either most likely to materialise, or which will probably have the greatest impact are highlighted in pale yellow.
 
 
Up-front Risks

In the beginning

Risk 1: Not appreciating the size of work for both business and technology resources.
Potential Impact: Team is set up to fail – it is neither responsive enough to business needs (resulting in yet more “unofficial” repositories and additional fragmentation), nor is appropriate progress made on its central objective.

Risk 2: Not establishing a dedicated team.
Potential Impact: The team never escapes from “the day job” or legacy / BAU issues; the past prevents the future from being built.

Risk 3: Not establishing a unified and collaborative team.
Potential Impact: Team is plagued by people pursuing their own agendas and trashing other people’s approaches; this consumes management time on non-value-added activities, leads to infighting and dissipates energy.

Risk 4: Staff lack skills and prior experience of data programmes.
Potential Impact: Time spent educating people rather than getting on with work. Sub-optimal functionality, slippages, later performance problems, higher ongoing support costs.

Risk 5: Not establishing an appropriate management / governance structure.
Potential Impact: Programme is not aligned with business needs, is not able to get necessary time with business users and cannot negotiate the inevitable obstacles that block its way. As a result, the programme gets “stuck in the mud”.

Risk 6: Failing to recognise ongoing local needs when centralising.
Potential Impact: Local business units do not have their pressing needs attended to and so lose confidence in the programme and instead go their own way. This leads to duplication of effort, increased costs and likely programme failure.

With risk 2, an analogy is trying to build a house in your spare time. If work can only be done in evenings or at weekends, then this is going to take a long time. Nevertheless, organisations too frequently expect data programmes to be absorbed within existing headcount and fitted in between people’s day jobs.

We can extend the building metaphor to cover risk 4. If you are going to build your own house, it would help to understand carpentry, plumbing, electrics and brick-laying, and also to have a grasp of the design fundamentals of how to create a structure that will withstand wind, rain and snow. Too often companies embark on data programmes with staff who have a bit of a background in reporting or some related area and with managers who have never been involved in a data programme before. This is clearly a recipe for disaster.

Risk 5 reminds us that governance is also important – both to ensure that the programme stays focussed on business needs and also to help the team to negotiate the inevitable obstacles. This comes back to a successful data programme needing to be more than just a technology project.
 
 
Programme Execution Risks

Programme execution

Risk 7: Poor programme management.
Potential Impact: The programme loses direction. Time is expended on non-core issues. Milestones are missed. Expenditure escalates beyond budget.

Risk 8: Poor programme communication.
Potential Impact: Stakeholders have no idea what is happening [6]. The programme is viewed as out of touch / not pertinent to business issues. Steering does not understand what is being done or why. Prospective users have no interest in the programme.

Risk 9: Big Bang approach.
Potential Impact: Too much time goes by without any value being created. The eventual Big Bang is instead a damp squib. Large sums of money are spent without any benefits.

Risk 10: Endless search for the perfect solution / adherence to overly theoretical approaches.
Potential Impact: Programme constantly polishes rocks rather than delivering. Data models reflect academic purity rather than real-world performance and maintenance needs.

Risk 11: Lack of focus on interim deliverables.
Potential Impact: Business units become frustrated and seek alternative ways to meet their pressing needs. This leads to greater fragmentation and reputational damage to the programme.

Risk 12: Insufficient time spent understanding source system data and how data is transformed as it flows between systems.
Potential Impact: Data capabilities that do not reflect business transactions with fidelity. There is inconsistency with reports drawn directly from source systems. Reconciliation issues arise (see next point).

Risk 13: Poor reconciliation.
Potential Impact: If analytical capabilities do not tell a consistent story, they will not be credible and will not be used.

Risk 14: Wrong approach to data quality.
Potential Impact: Data facilities are seen as inaccurate because of poor data going into them. Data facilities do not match actual business events due to either massaging of data or exclusion of transactions with invalid attributes.

Probably the single most common cause of failure with data programmes – and indeed of ERP projects, acquisitions and any other type of complex endeavour – is risk 7, poor programme management. Not only do programme managers have to be competent, they should also be steeped in data matters and have a good grasp of the factors that differentiate data programmes from more general work.

Relating to the other highlighted risks in this section, the programme could spend two years doing work without surfacing anything much and then, when it does make its first delivery, this is a dismal failure. In the same vein, exclusive focus on strategic capabilities could prevent attention being paid to pressing business needs. At the other end of the spectrum, interim deliveries could spiral out of control, consuming all of the data team’s time and meaning that the strategic objective is never reached. A better approach is for targeted and prioritised interim deliverables to help address pressing business needs while also informing more strategic work. From the other perspective, progress on strategic work-streams should be leveraged whenever it can be, perhaps in a less functional manner than the eventual solution, but good enough and also helping to make sure that the final deliveries are spot on [7].
 
 
User Requirement Risks

Dear Santa

Risk 15: Not enough up-front focus on understanding key business decisions and the information necessary to take them.
Potential Impact: Analytic capabilities do not focus on what people want or need, leading to poor adoption and benefits not being achieved.

Risk 16: In the absence of the above, the programme becoming a technology-driven one.
Potential Impact: The business gets what IT or Change think that they need, not what is actually needed. There is more focus on shiny toys than on actionable information. The programme forgets the needs of its customers.

Risk 17: A focus on replicating what the organisation already has but in better tools, rather than creating what it wants.
Potential Impact: Beautiful data visualisations that tell you close to nothing. Long lists of existing reports with their fields cross-referenced to each other and a new solution that is essentially the lowest common denominator of what is already in place; a step backwards.

The other most common reason for data programme failure is a lack of focus on user needs and insufficient time spent with business people to ensure that systems reflect their requirements [8].
 
 
Integration Risk

Lego

Risk 18: Lack of leverage of new data capabilities in front-end / digital systems.
Potential Impact: These systems are less effective. The data team jealously insists that its capabilities are the only way that users should get information, rather than adopting a more pragmatic and value-added approach.

It is important for the data team to realise that their work, however important, is just one part of driving a business forward. Opportunities to improve other system facilities by the leverage of new data structures should be taken wherever possible.
 
 
Deployment Risks

Education

Risk 19: Education is an afterthought; training is technology- rather than business-focused.
Potential Impact: People neither understand the capabilities of new analytical tools, nor how to use them to derive business value. Again this leads to poor adoption and little return on investment.

Risk 20: Declaring success after initial implementation and training.
Potential Impact: Without continuing to water the immature roots, the plant withers. Early adoption rates fall and people return to how they were getting information pre-launch. This means that the benefits of the programme are not realised.

Finally, excellent technical work needs to be complemented with equal attention to business-focussed education, training using real-life scenarios and assiduous follow-up. These things will make or break the programme [9].
 
 
Summary

Of course I don’t claim that the above list is exhaustive. You could successfully mitigate all of the above risks on your data programme, but still get sunk by some other unforeseen problem arising. There is a need to be flexible and to adapt to both events and how your organisation operates; there are no guarantees and no foolproof recipes for success [10].

My recommendation to data professionals is to develop your own approach to risk management based on your own experience, your own style and the culture within which you are operating. If just a few of the items on my list of risks can be usefully amalgamated into this, then I will feel that this article has served its purpose. If you are embarking on a data programme, maybe your first one, then be warned that these are hard and your reserves of perseverance will be tested. I’d suggest leveraging whatever tools you can find in trying to forge ahead.

It is also maybe worth noting that, somewhat contrary to my point that data programmes are different, a few of the risks that I highlight above could be tweaked to apply to more general programmes as well. Hopefully the things that I have learnt over the last couple of decades of running data programmes will be something that can be of assistance to you in your own work.
 


 
Notes

 
[1]
 
For my thoughts on developing data (or, interchangeably, information) strategies see:

  1. Forming an Information Strategy: Part I – General Strategy
  2. Forming an Information Strategy: Part II – Situational Analysis and
  3. Forming an Information Strategy: Part III – Completing the Strategy

or the CliffsNotes versions of these on LinkedIn:

  1. Information Strategy: 1) General Strategy
  2. Information Strategy: 2) Situational Analysis and
  3. Information Strategy: 3) Completing the Strategy
 
[2]
 
Indeed sometimes an award-winning one.
 
[3]
 
An abridged list would include:

  • ERP design, development and implementation
  • ERP selection and implementation
  • CRM design, development and implementation
  • CRM selection and implementation
  • Integration of acquired companies
  • Outsourcing of systems maintenance and support
 
[4]
 
For an examination of this area you can start with A more appropriate metaphor for Business Intelligence projects. While written back in 2008-9, the content of this article is as pertinent today as it was back then.
 
[5]
 
I cover this area in greater detail in Is outsourcing business intelligence a good idea?
 
[6]
 
Stakeholder

Probably a bad idea to make this stakeholder unhappy (see also Themes from a Chief Data Officer Forum – the 180 day perspective, note [3]).

 
[7]
 
See Vision vs Pragmatism, Holistic vs Incremental approaches to BI and Tactical Meandering for further background on this area.
 
[8]
 
This area is treated in the strategy articles appearing in note [1] above. In addition, some potential approaches to elements of effective requirements gathering are presented in Scaling-up Performance Management and Developing an international BI strategy.
 
[9]
 
Of pertinence here is my trilogy on the cultural transformation aspects of information programmes:

  1. Marketing Change
  2. Education and cultural transformation
  3. Sustaining Cultural Change
 
[10]
 
Something I stress forcibly in Recipes for Success?

 

 

Indiana Jones and The Anomalies of Data

One of an occasional series [1] highlighting the genius of Randall Munroe. Randall is a prominent member of the international data community and apparently also writes some sort of web-comic as a side line [2].

I didn't even realize you could HAVE a data set made up entirely of outliers.
Copyright xkcd.com

Data and Indiana Jones, these are a few of my favourite things… [3] Indeed I must confess to having used a variant of the image below in both my seminar deck and – on this site back in 2009 – a previous article, A more appropriate metaphor for Business Intelligence projects.

Raiders of the Lost Ark II would have been a much better title than Temple of Doom IMO

In both cases I was highlighting that data-centric work is sometimes more like archaeology than the frequently employed metaphor of construction. To paraphrase myself, you never know what you will find until you start digging. The image suggested the unfortunate results of not making this distinction when approaching data projects.

So, perhaps I am arguing for fewer Data Architects and more Data Archaeologists; the whip and fedora are optional of course!
 


 Notes

 
[1]
 
Well not that occasional as, to date, the list extends to:

  1. Patterns patterns everywhere – The Sequel
  2. An Inconvenient Truth
  3. Analogies, the whole article is effectively an homage to xkcd.com
  4. A single version of the truth?
  5. Especially for all Business Analytics professionals out there
  6. New Adventures in Wi-Fi – Track 1: Blogging
  7. Business logic [My adaptation]
  8. New Adventures in Wi-Fi – Track 2: Twitter
  9. Using historical data to justify BI investments – Part III
 
[2]
 
xkcd.com if you must ask.
 
[3]
 
Though in this case, my enjoyment would have been further enhanced by the use of “artefacts” instead.

 

 

An informed decision

Caterham 7 vs Data Warehouse appliance - spot the difference

A friend and fellow information professional is currently responsible for both building a new data warehouse and supporting its predecessor, which is based on a different technology platform. In these times of ever-increasing focus on costs, she had been asked to port the old warehouse to the new platform, thereby avoiding some licensing payments. She asked me what I thought about this idea and we chatted for a while. For some reason, our conversation went off at a bit of a tangent and I started to tell her the story of an acquaintance of mine and his recent sad experiences.

+++

My acquaintance, let’s call him Jim to avoid causing any embarrassment, had always been interested in cars; driving them, maintaining them, souping them up, endlessly reading car magazines and so on. His dream had always been to build his own car and his eye had always been on a Caterham kit. I suppose for him the pleasure of making a car was at least as great as, if not greater than, the pleasure of driving one.

It's just like Lego

Jim saved his pennies and eventually got together enough cash to embark on his dream project. Having invested his money, he started to also invest his time and effort. However, after a few weeks of toil, he hit a snag. It was nothing to do with his slowly emerging Caterham, but to do with the more quotidian car he used for his daily commute to work. Its engine had developed a couple of niggles that had been resistant to his own attempts to fix them and he had reluctantly decided that it was in need of some new parts and quite expensive ones at that. Jim had already spent quite a bit of cash on the Caterham and more on some new tools that he needed to assemble it. The last thing he wanted to do now was to have a major outlay on his old car; particularly because, once the Caterham was finished, he had planned to trade it for its scrap-metal worth.

But now things got worse: Jim’s current car failed its MOT (vehicle safety inspection for any non-UK readers) because the faulty engine did not meet emission standards. However, one of his friends came up with a potential solution. He said, “As you have already assembled the Caterham engine, why not put this into your current car and use this instead? You can then swap it out into the Caterham chassis and body when you have built this.”

Headless Jim - with cropped face to protect his anonymity

This sounded like a great idea to Jim, but there were some issues with it. His Caterham was supplied with a Cosworth-developed 2.3-litre Ford Duratec engine. This four-cylinder twin cam unit was the wrong size and shape to fit into the cavity left by removing the worn-out engine from his commuting car. Well, as I mentioned at the start, Jim was a pretty competent amateur mechanic and he thought that he had a good chance of rising to the challenge. He was motivated by the thought of not having to shell out extra cash and in any case he loved tinkering with cars.

So he put in some new brackets to hold the Caterham engine. He then had to grind down a couple of protruding pieces of the Duratec block to gain the extra 5 mm necessary to squeeze it in. The fuel feeds were in the wrong place, but a bit of plumbing and that was also sorted. Perhaps this might cause an issue with the efficiency of the engine’s burn cycle, but Jim figured that it would probably be OK. Next, the vibration dampers were not really up to the job of dealing with the more powerful engine and neither was the exhaust system. No worries, thought Jim: a tap of a hammer here, a bend of a pipe there, and he could also add in a couple of components that had been sitting at the back of his garage rusting for years.

Jim ventured out of his garage in his old car, with its new engine. He was initially a bit trepidatious, but his work seemed to be hanging together. Sure, the car was making a bit of a noise, shaking a bit and the oil temperature seemed a bit high, but Jim felt that these were only minor problems. He told himself that all his handiwork had to do was to hang together for a few more months until he finished the rest of the Caterham.

Angular momentum = Sum over i : Ri x mi x Vi

With these nice thoughts in mind, Jim approached a bend. The car flew off the road at a tangent as he realised – too late – that he had been travelling at Caterham speeds into the corner and didn’t have a Caterham chassis, a Caterham suspension, or Caterham brakes. His old car was not up to dealing with the forces created in the turn. His tyres failed to grip and, after what seemed like an eternity of slow-motion spinning and screeching and panic, he found himself in a ditch; healthy, but with a wheel sheared off and smoke coming out of the front of the car. A later inspection confirmed that his commuting car was a write-off, and his insurance policy didn’t fully cover the cost of a new vehicle.

Jim ended up having to buy another day-to-day car, which delayed him from spending the additional money necessary to get the Caterham on the road for quite some time. However, after scrimping and saving for a while, he eventually got back to his dream project, only to find that the combination of the modifications he had to make to the Duratec engine and the after-effects of the crash meant that it was now useless and he needed to purchase a replacement.

So, because Jim didn’t want to run to the expense of maintaining his old car while he built his new one, he instead had to buy a new temporary car plus a new engine for the Caterham. Jim was just as far from finishing the Caterham as when he had started, despite wasting a lot of time and money along the way. A very sad story.

+++

Suddenly I realised that I had been wittering on about a wholly unrelated subject to my friend’s data warehousing problem. I apologised and turned the conversation back to this. To my astonishment, she told me that she had already made up her mind. I suppose she had taken advantage of the length of time I had spent telling Jim’s story to more profitably weigh the pros and cons of different approaches in her mind and thereby had reached her decision. Anyway, she thanked me for my help, I protested that I hadn’t really offered her any and we each went our separate ways.

I found out later she had decided to pay the maintenance costs on the old data warehouse.


I would like to apologise in advance if anyone at Caterham, Cosworth, Ford, or indeed Peugeot, takes offence to any of the content of the above story or its illustrations. I’m sure that you make very fine products and this article isn’t really about any of them.

Projects

At the risk of over-extending the business metaphor offered by rock climbing (and the even greater risk of boring readers who don’t have the slightest interest in my climbing rehabilitation), here is a brief update on my injury situation; closing with the normal technology-focussed twist. I suppose that part of my motivation in composing this piece lies in the fact that some of my recent climbing-related writing has been on the negative side; albeit focusing on business lessons that can be gleaned from my past rock climbing mistakes. Instead this article adopts a more positive tone looking for ways in which signs of progress in a sport can set you up for professional success.

Back in the day... (Space Suit - V3 - Bishop, California)

I have previously explained how I managed to injure my hand climbing a while back. Given the horrendous popping noise my left ring finger made when I hurt it, it is a reasonable assumption that I have a partial pulley tear. Having already not climbed at any serious level for some time, this injury kept me away from both rock and wall for several months. On the odd occasion that I did climb, it was a rather tentative and worried affair. Part of me felt that I would not ever be able to climb even adequately again; part of me didn’t want to get a hand surgeon’s opinion, lest it confirm my worst fears. This was obviously not a great mental attitude to adopt and I rather felt that a chunk of my life was missing, or at least going badly.

However, having recently relocated to Cambridge (England not Massachusetts), my partner and I discovered the Kelsey Kerridge Sports Centre and learnt that their indoor climbing wall was in the process of being extended and upgraded. Just before Christmas we went along, to be honest without any great expectations; either of the wall or the standard of my climbing. However we were pleasantly surprised by a relatively extensive and modern facility and some very well-set and interesting problems (for an explanation of why climbs in bouldering are called problems and indeed a definition of bouldering, see Perseverance). Another plus is that many of these used plastic holds (manufactured by Sheffield’s Core Climbing) that were quite friendly to injured fingers; or at least at the lowly grades that I was initially climbing at.

Since first going we have become regulars and even interspersed a couple of trips to our old London climbing haunt of The Arch. I have been taking the (probably psychological) precaution of using climbing finger tape to bind up the damaged area. I learnt my lesson and started on easy ground with little potential to aggravate my finger. The build up to harder climbing (for me) was measured, despite a growing desire to push myself. So far, despite a couple of twinges, it has been going OK.

The quality of setting at Kelsey Kerridge has been such that, though not much has changed at the wall since mid-December, as my climbing has steadily improved, I have been able to find more interesting problems at the next level. Indeed I seem to have found a number of projects (again see Perseverance for a definition), at an increasing level of difficulty and which have taken between two and five sessions to finally crack.

Two sessions ago, I finally got up my first indoor V4 in literally years. This was something of a landmark not only because it means that I am getting back to the vicinity of where I was pre-injuries, but more specifically as the problem requires a big, dynamic move onto a small edge for my damaged left hand. It even began to feel quite comfortable making this move after a while.

This video is of me on a V3 problem at Kelsey Kerridge

Even now, I am still taking to heart the learnings that I pointed out in earlier articles and am not trying to push things too quickly. However, I have now completed several climbs that I could not even pull onto a few weeks back and have some harder projects on which I am making significant and somewhat surprising progress.

It feels good to be back climbing at any level and even better that my hand is – [undamaged] fingers-crossed – holding up so far. A positive learning here is that when you feel at a low ebb – as inevitably happens to the most enthusiastic of project managers, running the most dynamic and important of projects – maybe the physical act of doing something is the best antidote. Even if what you do does not work out immediately, it may provide you with other ideas that might be more successful.

Contemporaneous to this climbing progress, I am taking on new challenges in my work life. At least for me, success on climbing projects gives me a great fillip when thinking about the longer term projects I face in a work context. Success in one area of life can be contagious. Making slow, but steady progress at the wall makes me feel that many things are possible in my professional arena. It is nice to be back in what I hope will continue to be a virtuous circle.

Angelus Domini nuntiavit Sharmae...


For anyone interested in other analogies I have drawn between climbing and business and technology issues, here is a list in chronological order:

 

  1. Perseverance
  2. A bad workman blames his [Business Intelligence] tools
  3. Running before you can walk
  4. Feasibility studies continued…
  5. Incremental Progress and Rock Climbing

 

 

Incremental Progress and Rock Climbing

Ovum

Introduction

Last week Ovum and @SarahBurnett were kind enough to invite me to speak at their Business Intelligence Masterclass in London.

Unfortunately one of the Ovum presenters, Madan Sheina, was ill, but Sarah did a great job running the session. The set up of the room and the number of delegates both encouraged interaction and there was a great atmosphere with lots of questions from the attendees and some interesting exchanges of ideas. Work commitments meant that I had to leave after lunch, which was a shame as I am sure that – based on what I saw in the morning – the afternoon workshop sessions would have been both entertaining and productive.

I certainly enjoyed my presentation – on Initiating and Developing a BI Strategy – which focussed on both my framework for success in Business Intelligence and, in particular, addressing the important cultural transformation aspects of these. Thank you also to the delegates both for the questions and observations and for kindly awarding my talk an 83% rating via the now ubiquitous seminar questionnaire.

Bouldering and Cultural Transformation

Boysen's Groove (V3/4) Dinas Mot, North Wales
My partner bouldering the classic Boysen’s Groove in Snowdonia

As part of my section on change management, I covered some of the themes that I introduced in my article Perseverance. In this I spoke about one of the types of rock climbing that I enjoy: bouldering. Bouldering is regular rock climbing on steroids; it is about climbing ultra-hard, but short, climbs – often on boulders, hence the name. I compared the level of commitment and persistence required for success in bouldering to the need for the same attributes in change management initiatives.

I spoke to a few different delegates about this analogy during a coffee break. One in particular came up with an interesting expansion on my rock climbing theme. He referred to how people engaged in mountaineering and multi-pitch rock climbing make progress in a series of stages, establishing a new base at a higher point before attempting the next challenge. He went on to link this to making incremental progress on IT projects. I thought this was an interesting observation and told the gentleman in question that he had provided the inspiration for a future blog article.

An introduction to lead climbing

The above video is excerpted from the introduction to Hard Grit, a classic 1998 climbing film by Slackjaw Productions. It features climbing on the Gritstone (a type of hard sandstone) edges of the UK’s Peak District. This famous sequence shows a pretty horrendous fall off a Peak District test piece called Gaia at Black Rocks. Amazingly, the climber received no injuries worse than a severely battered and lacerated leg. Despite its proximity to my home town of London, Gritstone climbing has never been my cup of tea – it is something of an acquired taste and one that I have never appreciated as much as its many devotees.

As an aside you can see a photo of a latter-day climber falling off the same route at the beginning of my article, Some reasons why IT projects fail. I’m glad to say in this photo, unlike the video above, the climber is wearing a helmet!

What the clip illustrates is the dangers inherent in the subject of this article; traditional lead climbing. OK, the jargon probably needs some explanation. First of all, climbing is a very broad church; in this piece I’ll be ignoring whole areas such as mountaineering, soloing and the various types of winter and ice climbing. I am going to focus on roped climbing on rock, something that generally requires dry weather (unless you are a masochist or the British weather changes on you).

In this activity, one person climbs (unsurprisingly the climber) and another holds the rope attached to them (the belayer). The belayer uses a mechanism called a belay device to do this, but we will elide these details. With my background in Business Intelligence, I’ll now introduce some dimensions with which you can “slice and dice” this activity:

  1. multi-pitch / single pitch

    Single-pitch climbs are shorter than a length of rope (typically 50-70m) and often happen on rock outcrops such as in the Peak District mentioned above. The climber completes the climb and then the belayer may follow them up if they want, or alternatively the climber might walk round to find an easy descent and the pair will then go and find another climb.


    Multi-pitch climbs consist of at least two pitches; and sometimes many more. They tend to be in a mountain environment. One person may climb a pitch and then alternate with their partner, or the same person may climb each section first. It depends on the team.

  2. top roping / leading

    Top roping is not a very precise term (bottom roping might be more accurate) but is generally taken to mean that the rope runs from the belayer, to the top of the climb and then down to the climber.

    A fall when top roping

    As the climber ascends, the belayer (hopefully!) takes in the slack, but (again hopefully!) without hauling the climber up the route. This means that if the climber falls (and the belayer is both competent and attentive) they should be caught by the rope almost immediately. Obviously this arrangement only works on single-pitch climbs.


    In lead climbing, or leading, the rope runs from the belayer up to the climber. As the climber ascends, they attach the rope to various points in the rock on the climb (for how they do this see the next bullet point).

    A leader fall - assuming that the gear holds

    Assuming that the climber is able to make a good attachment to the rock (again see next point), the issue here is how far they fall. If they climb 2m above their last attachment point, then a slip at this point will see them swinging 2m below this point – a total fall of 4m, much longer than when top roping. Also, if the last attachment point is, say, 10m above the ground and the climber falls off, say, 8m above this, then slack in the system and rope stretch will probably see them hit the ground; something that should never happen in top roping. (A deliberately simplified sketch of this arithmetic appears after this list.)

    [As an aside, true top roping is what happens when the belayer climbs up after the climber. Here they are now belayed by the original climber from above. However, no one uses the term top roping for this; instead they talk about bringing up the second, or seconding. Top roping is reserved for the practice of bottom roping described above – no one said that climbing was a logical sport!]

  3. sport / traditional

    In the last point I referred to a lead climber mysteriously attaching themselves to the rock as they ascend. The way that they do this determines whether they are engaged in sport or traditional climbing (though there is some blurriness around the edges).

    In sport climbing, holes are pre-drilled into the rock at strategic intervals (normally 3-5m apart, but sometimes more). Into these are glued either a metal staple or a single bolt with a metal hanger on it that has a hole in it.

    Staple or bolt with hanger - used in Sport Climbing

    The process of equipping a sport route in this way can take some time, particularly if it is overhanging and of course it needs to be done well if the bolts are to hold a climber’s fall. A single-pitch sport climb may have 10 or more of these bolts, plus generally a lower-off point at the top.

    DMM Phantom Quick-draw (or extender)

    The climber will take with them at least the same number of quick draws as there are bolts. These are two spring-loaded carabiners joined by a section of strong tape. As the climber ascends, they clip one end of a quick-draw to the staple or hanger and the other end over the rope attaching them to their belayer.

    So long as the person who drilled and inserted the bolts did a good job and so long as the climber is competent in clipping themselves into these, then sport climbing should be relatively safe. At this point I should stress that I know of good climbers who have died sport climbing, often by making a simple mistake, often after having completed a climb and looking to lower off. Sport climbing is a relatively safer form of climbing, but it is definitely not 100% safe; no form of climbing is.

    Because of its [relative] safety, sport climbing has something of the ethos of bouldering, with a focus on climbing at your limit as the systems involved should prevent serious injury in normal circumstances.


    In traditional climbing (universally called trad) the difference is that there are no pre-placed bolts; instead, the climber has to take advantage of the nature of the rock to arrange their own attachment points. This means that you have to take the contents of a small hardware store with you on your climb. The assorted pieces of gear that you might use to protect yourself include:

    Nuts/wires (which you try to wedge into small cracks):

    A selection of DMM wall-nuts

    Hexes (which you try to wedge into large cracks):

    Wild Country Hexacentrics

    Cams/Friends (spring-loaded mechanical devices that you place in parallel cracks – the latter name being a make of cams):

    Black Diamond Cam-a-lots

    Slings (which you use to lasso spikes, or thread through any convenient holes in the rock):

    Dynema slings

    Once you have secured any of the above into or around the rock, you clip in with a quick-draw as in Sport climbing and heave a sigh of relief.
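To make the fall-distance arithmetic from the leading discussion above concrete, here is a deliberately simplified Python sketch. It only captures the geometry described earlier (twice the height above the last runner, plus any slack); rope stretch, belayer movement and friction are real factors that it deliberately ignores.

```python
# Simplified lead-fall arithmetic (illustrative only; it ignores rope stretch,
# belayer movement and friction, all of which matter in a real fall).
def lead_fall_distance(height_above_gear_m: float, slack_m: float = 0.0) -> float:
    """Approximate distance a lead climber falls before the rope comes tight.

    They drop back to the height of their last piece of gear, the same distance
    again below it, plus any slack in the system.
    """
    return 2 * height_above_gear_m + slack_m

# 2 m above the last runner gives roughly a 4 m fall, as described above.
print(lead_fall_distance(2.0))  # 4.0

# 8 m above a runner that is itself 10 m off the ground means a 16 m fall,
# leaving only a 2 m margin above the deck - one that slack and rope stretch
# can easily consume.
print(lead_fall_distance(8.0))  # 16.0
```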

In the video that started this section, Jean-Minh Trinh-Thieu falls (a long way) on to a cam, which thankfully holds. The issue on this particular climb is that there are no more opportunities to place gear after the final cam at round about half-way up. The nature of the rock means that a lot of Gritstone climbing is like this; one of the reasons that it is not a favourite of mine.

In any case, having established the above dimensions, I am going to drill down via two of them to concentrate on just trad leading. My comments apply equally to multi- and single-pitch, but the former offers greater scope for getting yourself into trouble.

The many perils of trad leading

This is why they call lead climbing "the sharp end"
Dave Birkett on “Nowt Burra Fleein’ thing” E8 6c, Cam Crag, Wasdale, The English Lake District – © Alastair Lee – Posingproductions.com

One of the major issues with trad climbing, particularly multi-pitch trad climbing in a mountain environment, is that you are never quite sure what you need to take. The more gear you clip to your harness, the more likely you are to be able to deal with any eventuality, but the heavier you are going to be and the harder it will be to climb. Someone once compared trad leading to climbing wearing a metal skirt.

The issue here is that not only do you have to find somewhere to place this protective gear, you have to place it well so that it is not dislodged as you climb past and does not pull out if you fall. What adds to this problem is that you may have to try to place, say, a wire in a situation where you are holding on to a small hold with one hand, with only one foot on a hold and the other dangling. You may also be on an overhang and thus with all gravity’s force coming to bear on your tendons. At such moments thoughts like “how far below was my last piece of gear?”, “how confident am I that I placed it well?” and “what happens if I can’t fiddle this piece of metal into this crack before my fingers un-peel?” tend to come to mind with alarming ease.

It is not unheard of for a trad leader to climb up many metres, placing an assortment of gear en route, only to fall off and have all of it rip out – a phenomenon called “unzipping”. Thankfully this is not something I have experienced directly, though I have seen it happen to other people.

These additional uncertainties tend to lead to a more cautious approach to trad leading, with many people climbing within their abilities on trad climbs. Some people push themselves on trad and some get away with it for a while. However there is a saying about there being old climbers and bold climbers, but no old bold climbers.

The links with business projects

El Capitan, Yosemite, CA.

I have written quite a few times before about the benefits of an incremental approach, so long as this bears the eventual strategic direction in mind (see for example: Tactical Meandering and Holistic vs Incremental approaches to BI). In rock climbing, even within a single pitch, it is often recommended to break this into sections, particularly if there are obvious places (e.g. ledges) where you can take a bit of a rest and consider the next section. This also helps with not being too daunted; often the biggest deal is to start climbing and once you are committed then things become easier (though of course this advice can also get you in over your head on occasion).

Splitting a climb into sections is a good idea, but – in the same way as with business projects – you need to keep your eye on your eventual destination. If you don’t you may be so focussed on the current moves that you go off route and then have to face potentially difficult climbing to get back where you need to be. The equivalent in business would be projects that do not advance the overall programme.

However the analogy doesn’t stop there. If we break a single-pitch trad lead climb into smaller sections, those between each piece of gear that you place, then it is obvious that you need to pay particular attention to the piece of equipment that you are about to employ. If you do this well, then you have minimised the distance that you will fall and this will bolster your confidence for the next piece of climbing. If you rush placing your gear, or assume that it is sort of OK, then at the best you will give yourself unnecessary concerns about your climbing for the next few metres. At worst a fall could lead to this gear ripping and a longer fall, or even hitting the ground.

In business projects, if you take an incremental approach, then in the same way you must remember that you will be judged on the success or failure of the most recent project. Of course if you have a track record of earlier success then this can act as a safety net; the same as when your highest piece of gear fails, but the next one catches you. However, it is not the most comfortable of things to take a really long leader fall and similarly it is best to build on the success of one project with further successes instead of resting on your laurels.

Of course the consequences of rushing your interim steps in rock climbing can be a lot more terminal than in business. Nevertheless failure in either activity is not welcome and it is best to take every precaution to avoid it.
 

Business Sponsorship

All contributions to this very deserving cause are most welcome
 
Strong business sponsorship is generally cited as a major success factor for IT projects. From one perspective this is essentially a truism, but looking at the phrase from a different angle perhaps reveals something of interest – indeed perhaps it highlights a reason for some IT projects failing. Let’s look at a definition to start with:
 

  sponsor /spónsər/ n. & v. • n. 2 a a person or organization that promotes or supports an artistic or sporting activity etc. (O.E.D.)  

 
There are other definitions, but maybe surprisingly the one I show above is probably the closest to the meaning of “business sponsorship”. The very first entry in my Oxford English Dictionary for this word is one that brings back memories:
 

  sponsor /spónsər/ n. & v. • n. 1 a person who supports an activity done for charity by pledging money in advance. (O.E.D.)  

 
This takes me back to school (a long time ago) when every year we had a sponsored 20 mile (32 km) walk around the streets of London, each time for a different charity. In an age before such events became mainstream, I believe we held some record for the amount of money raised. It is surprising how many hills you can fit into 20 miles, even in London, and I can well remember how tired I was after doing this as an eleven-year-old.

A good place to go and ask for sponsorship money

I can also recall wandering from house-to-house in my neighbourhood, knocking on doors with my sponsorship form to ask for pledges. As a rather naive child I never really understood why some people were occasionally a little disgruntled to have me appear on their doorstep at 9am on a Sunday. Of course, post walk, I had to do the same rounds again to collect the money. It escapes me how much I raised – several hundred pounds I think – but I also remember some people raising a lot more than that.

Both of the above definitions have the connotation of a kindly benefactor indulging a pet cause, be that the arts, or a small schoolboy. There is also the sense that the sponsor is vicariously involved, no one is asking them to play a recital, or to walk 20 miles. Perhaps here we begin to detect the seed of a problem.

When I read IT people on various on-line forums speaking about ensuring business sponsorship, or gaining business buy-in, I get the strong impression of an idea that originated in IT and is now seeking support. Some of the recent discussions on LinkedIn.com, which formed the basis for my earlier article: Who should be accountable for data quality? are a case in point. Several contributors have made comments along the lines of “IT needs to educate the business about the importance of data quality” – as well as being rather patronising, I think that this perspective on business life is rather wrong-headed.

In my mind it takes me back to an IT colleague (at a company which I will not mention) saying “of course we [i.e. IT people] are so much smarter than them [i.e. non-IT people]”. To this day I am unsure whether he was joking or not. In my experience, IT people are just like non-IT people: some are smart, some are not, most are somewhere in between – I suspect the distribution is pretty similar in both cases.

Why have a dog and Bach yourself?

So when people talk about business sponsorship, maybe this is code for convincing the paymasters that some of IT’s ideas are worth spending money on. Maybe it is the same as a penniless 18th century musician seeking the indulgence of a feudal monarch. IT may have all of the tunes, but he who pays the piper…

On the other hand, if IT and non-IT were well-aligned then maybe it would be more of a case of the business seeking IT sponsorship; i.e. of business folk originating ideas and IT working out how to implement them. Of course I tend to be an advocate of a partnership approach. I read recently on a LinkedIn.com thread about some IT departments being active and others passive. I would recommend IT being active, but not in the sense of pursuing its own agenda, or feeling (as perhaps my IT colleague did) that it knows best.

This was the noblest IT project of them all...

Maybe instead of seeking business sponsorship – which sounds rather like what you would do after IT had already figured out what to do and why – it would make sense to seek business engagement much earlier in the piece. This would hopefully lead to jointly crafted approaches that have business support baked in from the outset. Surely this is preferable to the corporate equivalent of going door-to-door soliciting money, no matter how noble the cause might appear to the IT person who originated it.