A recording of me being interviewed by Brian Roger of SmartDataCollective.com

SmartDataCollective.com

I have been a featured blogger on SmartDataCollective.com almost as long as I have been a blogger. SDC.com is Social Media Today’s community site, focussed on all aspects of Business Intelligence, Data Warehousing and Analytics, with a pinch of social media thrown into the mix.

Brian Roger, the SDC.com editor, was recently kind enough to interview me about my career in BI, the challenges I have faced and what has helped to overcome these. This interview is now available to listen to as part of their Podcast series – click on the image below to visit their site.


SmartDataCollective.com Interview

I would be interested in feedback about any aspect of this piece, which I am grateful to Brian for arranging.
 


 
Social Media Today LLC helps global organizations create purpose-built B2B social communities designed to achieve specific, measurable corporate goals by engaging exactly the customers and prospects they most want to reach. Social Media Today helps large companies leverage the enormous power of social media to build deeper relationships with potential customers and other constituencies that influence the development of new business. They have found that their primary metrics of success are levels of engagement and business leads. One thousand people who come regularly and might buy an SAP, Oracle or Teradata system some day are better than a million people who definitely won’t.

Social Media Today LLC is a battle-tested, nimble team of former journalists, online managers, and advertising professionals who have come together to make a new kind of media company. With their backgrounds in, and passions for, business-to-business and public policy conversations, they have decided to focus their efforts in this area. To facilitate the types of conversations that they would like to see, Social Media Today is assembling the world’s best bloggers and providing them with an independent “playground” in which to publish their posts, to comment on and rate posts, and to connect with each other. On their flagship site, SocialMediaToday.com, they have brought together many of the most intriguing and original bloggers on media and marketing, covering all aspects of what makes up the connective tissue of social media from a global perspective.
 

A blast from the past…

Gutenberg Printing Press

I recently came across a record of my first ever foray into the world of Business Intelligence, which dates back to 1997. This was when I was still predominantly an ERP person and was considering options for getting information out of such systems. The piece in question is a brief memo to my boss (the CFO of the organisation I was working at then) about what I described as the OLAP market.

This was a time of innocence, when Google had not even been incorporated, when no one yet owned an iPod and when, if you tried to talk to someone about social media, they would have assumed that you meant friendly journalists. All this is attested to by the fact that this was a paper memo that was printed and circulated in the internal mail – remember that sort of thing?

Given that the document has just had its twelfth birthday, I don’t think I am breaching any confidences in publishing it, though I have removed the names of the recipients for obvious reasons.
 

INTERNAL MEMORANDUM
To: European CFO
cc: Various interested parties
From: Peter Thomas
Date: 16th June 1997
Subject: What is OLAP?

 
On-Line Analytical Processing (OLAP) is a category of software technology that enables analysts, managers and executives to gain insight into data. This is achieved by providing fast, consistent, interactive access to a wide variety of possible views of information. This has generally been transformed from raw data to reflect the real dimensionality of the enterprise.

There are around 30 vendors claiming to offer OLAP products. A helpful report by Business Intelligence[1] (an independent research company) estimates the market shares of these vendors. As many of these companies sell other products, the following cannot be viewed as 100% accurate. However, the figures do provide some interesting reading.
 

                         -------- 1996 --------   -------- 1995 --------
Vendor                   Position    Share (%)    Position    Share (%)
Oracle                       1         19.0           1         20.0
Hyperion Software            2         18.0           2         19.0
Comshare                     3         12.0           3         16.0
Cognos                       4          9.0           4          5.0
Arbor Software               5          4.8           7          2.9
Holistic Systems             6          4.3           6          4.7
Pilot Software               7          4.0           5          4.8
MicroStrategy                8          3.5           9          2.1
Planning Sciences            9          2.6           8          2.3
Information Advantage       10          1.8          10          1.4

 
In this group, some companies (Hyperion, Comshare, Holistic, Pilot Software and Planning Sciences) provide either complete products or extensive toolkits. In contrast some vendors (such as Arbor and – outside the top ten – Applix) only sell specialist multi-dimensional databases. Others (e.g. Cognos and – outside the top ten – BusinessObjects and Brio Technology) offer client-based OLAP tools which are basically sophisticated report writers. The final group (including MicroStrategy and Information Advantage) offer a mixed relational / dimensional approach called Relational OLAP or ROLAP.

If we restrict ourselves to the “one-stop solution” vendors in the above list, it is helpful to consider the relative financial position of the top three.
 

Vendor                  Market Cap. ($m) [2]   Turnover ($m)   Profit ($m)
Oracle                        32,405             4,223 [3]         603
Hyperion Software                335               173 [4]           9
Comshare                         132               119 [5]          (9)

 

[1] The OLAP Report by Nigel Pendse and Richard Creeth © Business Intelligence 1997
[2] As at June 1997
[3] 12 months to March 1997
[4] 12 months to June 1996
[5] 12 months to June 1996

 


 
It is of course also worth pointing out that, back then, I disagreed with what Nigel Pendse wrote a lot less than I do now!
 

A single version of the truth?

The Data Warehousing Institute (TDWI™) 2.0

As is frequently the case, I was moved to write this piece by a discussion on LinkedIn.com. This time round, the group involved was The Data Warehousing Institute (TDWI™) 2.0 and the thread, entitled Is one version of the truth attainable?, was started by J. Piscioneri. I should however make a nod in the direction of an article on Jim Harris’ excellent Obsessive-Compulsive Data Quality Blog called The Data Information Continuum; Jim also contributed to the LinkedIn.com thread.

Standard note: You need to be a member of both LinkedIn.com and the group mentioned to view the discussions.
 
 
Introduction

A Calabi–Yau manifold

Here are a couple of sections from the original poster’s starting comments:

I’ve been thinking: is one version of the truth attainable or is it a bit of snake oil? Is it a helpful concept that powerfully communicates a way out of spreadmart purgatory? Or does the idea of one version of the truth gloss over the fact that context or point of view are an inherent part of any statement about data, which effectively makes truth relative? I’m leaning toward the latter position.

[…]

There can only be one version of the truth if everyone speaks the same language and has a common point of view. I’m not sure this is attainable. To the extent that it is, it’s definitely not a technology exercise. It’s organizational change management. It’s about changing the culture of an organization and potentially breaking down longstanding barriers.

Please join the group if you would like to read the whole post and the subsequent discussions, which were very lively. Here I am only going to refer to these tangentially and instead focus on the concept of a single version of the truth itself.

Readers who are not interested in the elliptical section of this article and who would instead like to cut to the chase are invited to click here (warning: there are still some ellipses in the latter sections).
 
 
A [very] brief and occasionally accurate history of truth

The demise of a cherry tree

I have discovered a truly marvellous proof of the nature of truth, which this column is too narrow to contain.

— Pierre de Tomas (1637)

Instead of trying to rediscover M. Tomas’ proof, I’ll simply catalogue some of the disciplines that have been associated (rightly or wrongly) with trying to grapple with the area:

  • Various branches of Philosophy, including:
    • Metaphysics
    • Epistemology
    • Ethics
    • Logic
  • History
  • Religion (or perhaps more generally spirituality)
  • Natural Science
  • Mathematics
  • and of course Polygraphism

Lie algebra

Given my background in Pure Mathematics, the reader might expect me to trumpet the claims of this discipline to be the sole arbiter of truth; I would reply yes and no. Mathematics does indeed deal in absolute truth, but only of the type: if we assume A and B, it then follows that C is true. This is known as the axiomatic approach. Mathematics makes no claim for the veracity of axioms themselves (though clearly many axioms would be regarded as self-evidently true by the non-professional). I will also manfully resist the temptation to refer to the wrecking ball that Kurt Gödel took to axiomatic systems in 1931.
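
To make the axiomatic pattern concrete, here is the stock example (my own illustration, not part of the original argument): two assumed axioms from which a theorem follows with complete certainty, while Mathematics remains silent on whether the axioms themselves are “true”.

```latex
\textbf{Axioms:}\quad (A)\;\; e \cdot x = x \text{ for all } x,
\qquad (B)\;\; x \cdot e' = x \text{ for all } x.

\textbf{Theorem:}\quad e = e'.

\textbf{Proof:}\quad
e \overset{(B),\,x=e}{=} e \cdot e' \overset{(A),\,x=e'}{=} e'.
\qquad\blacksquare
```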

Physical science

I have also made reference (admittedly often rather obliquely) to various branches of science on this blog, so perhaps this is another place to search for truth. However the Physical sciences do not really deal in anything as absolute as truth. Instead they develop models that approximate observations; these are called scientific theories. A good theory will both explain aspects of currently observed phenomena and offer predictions for yet-to-be-observed behaviour (what use is a model if it doesn’t tell us things that we don’t already know?). In this way scientific theories are rather like Business Analytics.

Unlike mathematical theories, the scientific versions are rather resistant to proof. Somewhat unfairly, while a mountain of experiments that are consistent with a scientific theory do not prove it, it takes only one incompatible data point to disprove it. When such an inconvenient fact rears its head, the theory will need to be revised to accommodate the new data, or entirely discarded and replaced by a new theory. This is of course an iterative process and precisely how our scientific learning increases. Warning bells generally start to ring when a scientist starts to talk about their theory being true, as opposed to a useful tool. The same observation could be made of those who begin to view their Business Analytics models as being true, but that is perhaps a story for another time.

The Thinker

I am going to come back to Physical science (or more specifically Physics) a little later, but for now let’s agree that this area is not going to result in defining truth either. Some people would argue that truth is the preserve of one of the other subjects listed above, either Philosophy or Religion. I’m not going to get into a debate on the merits of either of these views, but I will state that perhaps the latter is more concerned with personal truth than supra-individual truth (otherwise why do so many religious people disagree with each other?).

Discussing religion on a blog is also a certain way to start a fire, so I’ll move quickly on. I’m a little more relaxed about criticising some aspects of Philosophy; to me this can all too easily descend into solipsism (sometimes even quicker than artificial intelligence and cognitive science do). Although Philosophy could be described as the search for truth, I’m not convinced that this is the same as finding it. Maybe truth itself doesn’t really exist, so attempting to create a single version of it is doomed to failure. However, perhaps there is hope.
 
 
Trusting your GUT feeling

Physicists have a sense of humour too you know...
© xkcd.com

After the preceding divertimento, it is time to return to the more prosaic world of Business Intelligence. However there is first room for the promised reference to Physics. For me, the phrase “a single version of the truth” always has echoes of the search for a Grand Unified Theory (GUT). Analogous to our discussions about truth, there are some (minor) definitional issues with GUT as well.

Some hold that GUT applies to a unification of the electromagnetic, weak nuclear and strong nuclear forces at very high energy levels (the first two having already been paired in the electroweak force). Others that GUT refers to a merging of the particles and forces covered by the Standard Model of Quantum Mechanics (which works well for the very small) with General Relativity (which works well for the very big). People in the first camp might refer to this second unification as a ToE (Theory of Everything), but there is sometimes a limit to how much Douglas Adams’ esteemed work applies to reality.

For the purposes of this article, I’ll perform the standard scientific trick of a simplifying assumption and use GUT in the grander sense of the term.

Scientists have striven to find a GUT for decades, if not centuries, and several candidates have been proposed. GUT has proved to be something of a Holy Grail for Physicists. Work in this area, while not as yet having been successful (at least at the time of writing), has undeniably helped to shed light on many other areas where our understanding was previously rather dim.

This is where the connection with a single version of the truth comes in. Not so much that either concept is guaranteed to be achievable, but that a lot of good and useful things can be accomplished on a journey towards both of them. If, in a given organisation, the journey to a single version of the truth reaches its ultimate destination, then great. However if, in another company, a single version of the truth remains eternally just over the next hill, or round the next corner, then this is hardly disastrous and maybe it is the journey itself (and the aspirations with which it is commenced) that matters more than the destination.

Before I begin to sound too philosophical (cf. above) let me try to make this more concrete by going back to our starting point with some Mathematics and considering some Venn diagrams.
 
 
Ordo ab chao

In my experience the following is the type of situation that a good Business Intelligence programme should address:

Fragmentation

The problems here are manifold:

  1. Although the various report systems are shown as separate, the real situation is probably much worse. Each of the reporting and analysis systems will overlap, perhaps substantially, with one or more of the other ones. Indeed the overlapping may be so convoluted that it would be difficult to represent this in two dimensions and I am not going to try. This means that you can invariably ask the same question (how much have we sold this month?) of different systems and get different answers. It may be difficult to tell which of these is correct; indeed none of them may be a true reflection of business reality.
  2. There are a whole set of things that may be treated differently in the different ellipses. I’ll mention just two for now: date and currency (the date point is illustrated in the sketch just after this list). In one system a transaction may be recorded in the month when it is entered into the system. In another it may be allocated to the month when the event actually occurred (sometimes quite a while before it is entered). In a third perhaps the transaction is only dated once it has been authorised by a supervisor.

    In a multi-currency environment, reports may be in the transactional currency, rolled-up to the currency of the country in which they occurred, or perhaps aggregated across many countries in a number of “corporate” currencies. Which rate to use (rate on the day, average for the month, rolling average for the last year, a rate tied to some earlier business transaction etc.) may differ between systems; equally the rate may well vary according to the date of the transaction (making the last set of comments about which date is used even more pertinent).

  3. A whole set of other issues arise when you begin to consider things such as taxation (are figures nett or gross?), discounts, commissions to other parties, phased transactions and financial estimates. Some reports may totally ignore these; others may take account of some but not others. A mist of misunderstanding is likely to arise.
  4. Something that is not drawn on the above diagram is the flow of data between systems. Typically there will be a spaghetti-like flow of bits and bytes between the different areas. What is also not that uncommon is that there is both bifurcation and merging in these flows. For example, some sorts of transactions from Business Unit A may end up in the Marketing database, whereas others do not. Perhaps transactions carried out on behalf of another company in the group appear in Business Unit B’s reports, but must be excluded from the local P&L. The combinations are almost limitless.

    Interfaces can also do interesting things to data, re-labelling it, correcting (or so their authors hope) errors in source data and generally twisting the input to form output that may be radically different. Also, when interfaces are anything other than real-time, they introduce a whole new arena in which dates can get muddled. For instance, what if a business transaction occurred in a front-end system on the last day of a year, but was not interfaced to a corporate database until the first day of the next one – which year does it get allocated to in the two places?

  5. Finally, the above says nothing about the costs (staff and software) of maintaining a heterogeneous reporting landscape; or indeed the costs of wasted time arguing about which numbers are right, or attempting to perform tortuous (and ultimately fruitless) reconciliations.
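
To make the date point in 2. concrete, here is a minimal sketch in Python. The transaction records, field names and figures are all invented for illustration; the point is simply that three defensible date policies give three different answers to the same question.

```python
from datetime import date

# Invented transactions: the same three business events, each carrying the
# three dates that different systems might use to allocate it to a month.
transactions = [
    {"amount": 100, "occurred": date(2009, 6, 28),
     "entered": date(2009, 7, 2), "authorised": date(2009, 7, 5)},
    {"amount": 250, "occurred": date(2009, 6, 15),
     "entered": date(2009, 6, 16), "authorised": date(2009, 6, 20)},
    {"amount": 175, "occurred": date(2009, 5, 30),
     "entered": date(2009, 6, 1), "authorised": date(2009, 7, 1)},
]

def monthly_total(txns, date_policy, year, month):
    """Sum of transactions allocated to (year, month) under a date policy."""
    return sum(t["amount"] for t in txns
               if t[date_policy].year == year and t[date_policy].month == month)

# Three systems, three answers to "how much did we sell in June 2009?"
for policy in ("occurred", "entered", "authorised"):
    print(policy, monthly_total(transactions, policy, 2009, 6))
# occurred 350 / entered 425 / authorised 250 -- each "correct" by its own rule
```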

Now the ideal situation is that we move to the following diagram:

De-fragmentation

This looks all very nice and tidy, but there are still two major problems.

  1. A full realisation of this transformation may be prohibitively expensive, or time-consuming.
  2. Having brought everything together into one place offers an opportunity to standardise terminology and to eliminate the confusion caused by redundancy. However, it doesn’t per se address the other points made from 2. onwards above.

The need to focus on what is possible in a reasonable time-frame and at a reasonable cost may lead to a more pragmatic approach where the number of reporting and analysis systems is reduced, but to a number greater than one. Good project management may indeed dictate a rolling programme of consolidation, with opportunities to review what has worked and what has not and to ascertain whether business value is indeed being generated by the programme.

Nevertheless, I would argue that it is beneficial to envisage a final state for the information architecture, even if there is a tacit acceptance that this may not be realised for years, if at all. Such a framework helps to guide work in a way that making it up as we go along does not. I cover this area in more detail in both Holistic vs Incremental approaches to BI and Tactical Meandering for those who are interested.

It is also inevitable that even in a single BI system data will need to be presented in different ways for different purposes. To take just one example, if your goal is to see how the make-up of a book of business has varied over time, then it is eminently sensible to use a current exchange rate for all transactions; thereby removing any skewing of the figures caused by forex fluctuations. This is particularly the case when trying to assess the profitability of business where revenue occurs at a discrete point in the past, but costs may be spread out over time.

However, if it is necessary to look at how the organisation’s cash-flow is changing over time, then the impact of fluctuations in foreign exchange rates must be taken into account. Sadly if an American company wants to report how much revenue it has from its French subsidiary then the figures must reflect real-life euro / dollar rates (unrealised and realised foreign currency gains and losses notwithstanding).

What is important here is labelling. Ideally each report should show the assumptions under which it has been compiled at the top. This would include the exchange rate strategy used, the method by which transactions are allocated to dates, whether figures are nett or gross and which transactions (if any) have been excluded. Under this approach, while it is inevitable that the totals on some reports will not agree, at least the reports themselves will explain why this is the case.
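
To illustrate both the exchange-rate point and the labelling point, here is another hedged sketch, again with invented transactions and rates. The same euro business is reported under two strategies, and each report states its compilation assumptions up front, so that when the totals disagree the reports at least explain why.

```python
# Hypothetical euro-denominated revenue by month and EUR/USD rates.
transactions = [("2009-01", 1000.0), ("2009-04", 1000.0), ("2009-07", 1000.0)]
monthly_rate = {"2009-01": 1.32, "2009-04": 1.41, "2009-07": 1.27}
current_rate = 1.47  # hypothetical rate at the time the report is run

def report(title, assumption, rows):
    # Each report prints its assumptions at the top, per the labelling
    # principle: totals may disagree, but the reason is visible.
    print(title)
    print("Assumption:", assumption)
    for month, usd in rows:
        print(f"  {month}: ${usd:,.0f}")
    print(f"  Total: ${sum(usd for _, usd in rows):,.0f}\n")

# Strategy 1: one current rate throughout -- good for mix-of-business trends,
# since month-on-month movement cannot be caused by forex fluctuations.
report("Book of business (constant currency)",
       f"all months converted at current rate {current_rate}",
       [(m, eur * current_rate) for m, eur in transactions])

# Strategy 2: the rate prevailing in each month -- what cash-flow needs.
report("Book of business (historical rates)",
       "each month converted at that month's average rate",
       [(m, eur * monthly_rate[m]) for m, eur in transactions])
```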

So this is my take on a single version of the truth. It is both a) an aspirational description of the ideal situation and something that is worth striving for and b) a convenient marketing term – a sound-bite if you will – that presents a palatable way of describing a complex set of concepts. I tried to capture this essence in my reply to the LinkedIn.com thread, which was as follows:

To me, the (extremely hackneyed) phrase “a single version of the truth” means a few things:

  1. One place to go to run reports and perform analysis (as opposed to several different, unreconciled, overlapping systems and local spreadsheets / Access DBs)
  2. When something, say “growth” appears on a report, cube, or dashboard, it is always calculated the same way and means the same thing (e.g. if you have growth in dollar terms and growth excluding the impact of currency fluctuations, then these are two measures and should be clearly tagged as such).
  3. More importantly, that the organisation buys into there being just one set of figures that will be used and self-polices attempts to subvert this with roll-your-own data.

Of course none of this equates to anything to do with truth in the normal sense of the word. However life is full of imprecise terminology, which nevertheless manages to convey meaning better than overly precise alternatives.

More’s Utopia was never intended to depict a realistic place or system of government. This fact has not stopped generations of thinkers and doers from aspiring to make the world a better place, while realising that the ultimate goal may remain out of reach. In my opinion neither should the unlikelihood of achieving a perfect single version of the truth deter Business Intelligence professionals from aspiring to this Utopian vision.

I have come pretty close to achieving a single version of the truth in a large, complex organisation. Pretty close is not 100%, but in Business Intelligence anything above 80% is certainly more than worth the effort.
 

I will be giving a Business Intelligence Masterclass at “Business Process Excellence in Financial Services” London, September 22-24

Business Process Excellence in Financial Services
 
This event will be held in London’s Canary Wharf and has the strap-line: “Improving Business Agility and Performance Whilst Reducing Cost and Complexity”.

A selection of the organisations that seminar speakers work for appears below:
 

Axa • Citi • Co-operative Financial Services • Deutsche Bank • First Direct • HSBC • Kleinwort Benson • Lloyds TSB • Royal Bank of Scotland • Union Bancaire Privée • UniCredit Group

 
If you would like to find out more about this event then there are a variety of ways to do this:
 

Freephone: 0800 652 2363 or +44 (0)20 7368 9300
Fax: +44 (0)20 7368 9301
Mail: IQPC Ltd., Anchor House, 15-19 Britten Street, London SW3 3QL
Internet: www.bpefinance.com
e-mail: enquire@iqpc.co.uk

I hope to see some of you there.
 


 
A selected list of my previous public speaking may be viewed here.
 

“Involving users in business intelligence strategy key for success” – Christina Torode on SearchCio-Midmarket.com

Search CIO Midmarket Christina Torode

While browsing the, slightly idiosyncratically named, infoBOOM! Must-know people, ideas and opinions for mid-sized business group on LinkedIn.com today, I came across a link to an article about Business Intelligence on SearchCio-Midmarket.com (part of the TechTarget stable). This was by Christina Torode and is entitled, Involving users in business intelligence strategy key for success (please note that registration is required to read the full article, though this is a relatively painless process).

Christina cites the opinions of a number of industry experts and practitioners (as one would expect, the latter are mostly from the mid-market) in making her case, which is one that I pretty much agree with. These include: Boris Evelson at Forrester Research; Rob Fosnaugh, BI lead at Brotherhood Mutual Insurance Co.; and Chris Brady, CIO at Dealer Services Corp. The experiences of Rob and Chris in particular provide some useful pointers to techniques that may be appropriate for you to use in your own BI projects.

Commitment vs Involvement

I do however have one minor quibble. This is to do with the use of the word “involvement” in this context. Some of my concern may be explained by recourse to a dictionary.

  involve /invólv/ v.tr. 1 (often foll. by in) cause (a person or thing) to participate, or share the experience or effect (in a situation, activity, etc.). (O.E.D.)  

The point that I want to make is perhaps more clearly stated in the rather earthy adage about the difference between involvement and commitment relating to breakfast; this being that a chicken was involved with it, but a pig was committed to it.

To me involving business people in a BI project is not enough. It implies that IT is in the driving seat and that the project is essentially a technological one. Instead what I believe is required is a full partnership. I have written about the lengths that I have gone to in trying to achieve this in Scaling-up Performance Management and Developing an international BI strategy.

Aside: It is worth noting that the former of these articles covers a 9-month collaboration with 30 business people to define the overall BI needs of an insurance organisation in 13 European countries. This contrasts with a 2-month process at another (rather different) insurance organisation, Brotherhood Mutual, that Christina cites.

I should mention that the exercise I describe resulted in nine major reporting and analysis areas (chronologically: Profitability, Broker Management, Claims Management, Portfolio Management, Budget Management, Dashboard, Expense Management, Exposure Management and Service & Workflow) as opposed to a single one (Claims) at Brotherhood Mutual; so maybe the durations are comparable.

Either way the main lesson is that it takes time to get good requirements in BI.

The real-life examples that Christina mentions in her article seem to also lean a little more towards partnership / commitment than to involvement. It may seem that I am splitting hairs on this issue (maybe this is a byproduct of the things that I learnt about semantics yesterday), but I have seen BI projects fail to deliver on their promise specifically because the IT team became too internally focussed and lost touch with their business users after an initial (and probably inadequately thorough) requirement-gathering exercise.

Indeed I was once brought in to act as an internal consultant for a failing BI project and my main diagnosis was precisely that the business people were semi-detached from it. They had been “involved”, but this was never to a great degree and had also occurred some time in the past. My recommendation was ongoing and in-depth collaboration, to the degree that the BI team becomes a joint IT / business one with the distinctions between people’s roles blurring somewhat at the edges.

This partnership approach has worked for me (the results may be viewed here) and I have seen the lack of an IT / business partnership lead to failure in BI on a number of occasions. Rather than being the minor point I initially mentioned, I think that the difference between involvement and commitment can be make or break for a BI project.
 


 
Christina Torode has been a high tech journalist for more than a decade. Before joining TechTarget, she was a reporter for technology trade publication CRN covering a variety of beats from security and networking to telcos and the channel. She also spent time as a business reporter and editor with Eagle Tribune Publishing in eastern Massachusetts. For SearchCIO.com and SearchCIO-Midmarket, Christina covers business applications and virtualization technologies.
 

Literary calculus?

Seth Grimes Jean-Michel Texier
@sethgrimes @jmtexier

As mentioned in my earlier article, A first for me…, I was lucky enough to secure an invitation to an Nstein seminar held in London’s Covent Garden today. The strap-line for the meeting was Media Companies: The Most to Gain from Web 3.0 and the two speakers appear above (some background on them is included at the foot of this article). I have no intention here of rehashing everything that Seth and Jean-Michel spoke about, try to catch one or both of them speaking some time if you want the full details, but I will try to pick up on some of their themes.

Seth spoke first and explained that, rather than having the future Web 3.0 as the centre of the session, he was going to speak more about some of the foundational elements that he saw as contributing to this, in particular text mining and semantics. I have to admit to being a total neophyte when it comes to these areas and Seth provided a helpful introduction including the thoughts of such early luminaries as Hans Peter Luhn and drawing on sources of even greater antiquity. An interesting observation in this section was that Business Intelligence was initially envisaged as encompassing documents and text, before it evolved into the more numerically-focused discipline that we know today.

Seth moved on to speak about the concept of the semantic web where all data and text is accompanied by contextual information that allows people (or machines) to use it; enabling a greatly increased level of “data, information, and knowledge exchange.” The deficiencies of attempting to derive meaning from text based solely on statistical analysis were covered and, adopting a more linguistic approach, the issue of homonyms, where meaning is intrinsically linked to context, was also raised. The dangers of a word-by-word approach to understanding text can perhaps be illustrated by reference to the title of this article.

Such problems can be seen in the results that are obtained when searching for certain terms, with some items being wholly unrelated to the desired information and others related, but only in such a way that their value is limited. However some interesting improvements in search were also highlighted where the engines can nowadays recognise such diverse entities as countries, people and mathematical formulae and respond accordingly; e.g.

http://www.google.co.uk/search?&q=age+of+the+pope.

Extending this theme, Seth quoted the following definition (while stating that there were many alternatives):

Web 3.0 = Web 2.0 + Semantic Web + Semantic Tools

One way of providing semantic information about content is of course by humans tagging it: either the author of the content or subsequent reviewers. However there are limitations to this. As Jean-Michel later pointed out, how is the person tagging today meant to anticipate future needs to access the information? In this area, text mining or text analytics can enable Web 3.0 by the automatic allocation of tags; such an approach being more exhaustive and consistent than one based solely on human input.
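
This is emphatically not how a commercial text-mining engine works, but a toy Python sketch of the principle: tags allocated automatically from a controlled vocabulary are exhaustive and consistent in a way that author-supplied tags often are not. The vocabulary and function names here are invented.

```python
import re

# A controlled vocabulary mapping surface terms to canonical tags. A real
# text-analytics engine would use linguistics and statistics rather than
# literal matching, but the consistency benefit is the same in kind.
VOCABULARY = {
    "olap": "Business Intelligence",
    "data warehouse": "Data Warehousing",
    "semantic web": "Web 3.0",
    "sparql": "Web 3.0",
}

def auto_tag(text):
    """Return the set of canonical tags whose terms appear in the text."""
    lowered = text.lower()
    return {tag for term, tag in VOCABULARY.items()
            if re.search(r"\b" + re.escape(term) + r"\b", lowered)}

print(auto_tag("We migrated our OLAP cubes into the data warehouse."))
# {'Business Intelligence', 'Data Warehousing'} (set order may vary)
```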

Seth reported that the text analytics market has been holding up well, despite the current economic difficulties. In fact there was significant growth (approx. 40%) in 2008 and a good figure (approx. 25%) is also anticipated in 2009. These strong figures are driven by businesses beginning to realise the value that this area can release.

Seth next went through some of the high-level findings of a survey he had recently conducted (partially funded by Nstein). Amongst other things, this covers the type of text sources that organisations would like to analyse and the reasons that they would like to do this. I will leave readers to learn more about this area for themselves as this paper is due to be published in the near future. However, a stand-out finding was the level of satisfaction of users of text analytics. Nearly 75% of users described themselves as either very satisfied or satisfied. Only 4% said that they were dissatisfied. Seth made the comment, with which I concur, that these are extraordinarily high figures for a technology.

Jean-Michel took over at the half way point. Understandably a certain amount of his material was more focussed on the audience and his company’s tools, whereas Seth’s talk had been more conceptual in nature. However, he did touch on some of the technological components of the semantic web, including Resource Description Framework (RDF), Microformats, Web Ontology Language (OWL – you have to love Winnie the Pooh references, don’t you?) and SPARQL; there is a small RDF / SPARQL sketch after the list below. I’ll cover Jean-Michel’s comments in less detail. However a few things stuck in my mind, the first of these being:

  • Web 1.0 was for authors
  • Web 2.0 is for users (and embraces interaction)
  • Web 3.0 is also for machines (opening up a whole range of possibilities)
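
For anyone who, like me, is new to these acronyms, here is a minimal sketch of the machinery, assuming the open-source Python rdflib package (my choice of tool, nothing to do with Nstein’s products; the URIs are invented). A few RDF triples give a document machine-readable context, and a SPARQL query then finds content by meaning rather than by keyword.

```python
from rdflib import Graph

# Three RDF triples in Turtle syntax: statements of the form
# subject / predicate / object, here describing a single article.
turtle = """
@prefix ex: <http://example.org/> .
ex:article42 ex:title   "Literary calculus?" ;
             ex:author  ex:peter ;
             ex:subject ex:TextAnalytics .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

# SPARQL: retrieve the titles of everything whose subject is text analytics.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?title WHERE {
        ?article ex:subject ex:TextAnalytics ;
                 ex:title   ?title .
    }
""")
for row in results:
    print(row.title)  # -> Literary calculus?
```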

Second, Jean-Michel challenged the adage that “Content is King”, suggesting that this was slowly but surely morphing into “Context is King”, offering some engaging examples, which I will not plagiarise here. He was however careful to stress that “content will remain key”.

All in all, the two-hour session was extremely interesting. Both speakers were well-informed and engaging. Also, at least for a novice in the area like me, some of the material was very thought-provoking. As someone who is steeped in the numeric aspects of business intelligence, I think that I have maybe had my horizons somewhat broadened as a result of attending the seminar. It is difficult to think of a better outcome for such a gathering to achieve.
 


 
UPDATE: Seth has also written about his presentations on his BeyeNetwork blog. You can read his comments and find a link to a recording of the presentations here.
 


Seth Grimes Seth Grimes is an analytics strategy consultant, a recognized expert on business intelligence and text analytics. He is contributing editor at Intelligent Enterprise magazine, founding chair of the Text Analytics Summit, Data Warehousing Institute (TDWI) instructor, and text analytics channel expert at the Business Intelligence Network. Seth founded Washington DC-based Alta Plana Corporation in 1997. He consults, writes, and speaks on information-systems strategy, data management and analysis systems, industry trends, and emerging analytical technologies.

Jean-Michel Texier Jean-Michel Texier has been building digital solutions for media companies since the early days of the Internet. He founded Eurocortex, in France, where he built content management solutions specifically for press and media companies. When the company was acquired by Nstein Technologies in 2006, Texier took over as CTO and chief visionary, helping companies organize, package and monetize content through semantic analysis.

Nstein Nstein Technologies (TSX-V: EIN) develops and markets multilingual solutions that power digital publishing for the most prestigious newspapers, magazines, and content-driven organizations. Nstein’s solutions generate new revenue opportunities and reduce operational costs by enabling the centralization, management and automated indexing of digital assets. Nstein partners with clients to design a complete digital strategy for success using publishing industry best practices for the implementation of its Web Content Management, Digital Asset Management, Text Mining Engine and Picture Management Desk products. www.nstein.com

 

A first for me…

twitter.com

Today I went along to an Nstein seminar entitled, Media Companies: The Most to Gain from Web 3.0. The two speakers were: Seth Grimes, founder of business analytics consulting firm, Alta Plana, and contributing editor of Intelligent Enterprise; and Jean-Michel Texier, CTO of Nstein and expert in semantic analysis.

The meeting was held in Covent Garden, London and I’ll be writing a report in the near future. However, this brief article focusses on something else. I received my invitation to the event through Seth himself after having made contact with him on twitter.com (you can follow Seth at @sethgrimes).

I suppose that I first started frequenting internet forums (or bulletin boards as they were then called) back in 1998/9. The first person that I met in real life, having got to know him on-line, was a guy from Sweden called Anders, who happened to be taking a vacation in London. That was some point in 1999 after we had struck up a friendship by forum and e-mail and indeed spoken on the ‘phone. Since getting into climbing in 2004, I have also been a member of a climbing forum and have met (and climbed with) multiple people IRL after striking up an acquaintance on-line. This channel for meeting people has expanded with social media such as Facebook (most of the people I know on Facebook are climbers).

However, I have generally kept personal and professional separate on-line. An accident of history means that twitter.com is essentially a professional outlet for me. Which brings me back to the “first” referred to in the title. Seth has the somewhat dubious honour of being the first tweep that I have met IRL (not having known them before). It is also somewhat interesting to note that this occurred, more or less to the month, 10 years after my first personal encounter of this sort.

Perhaps this says something about the relative adoption speeds of new technologies and the opportunities that they offer for interaction when considering personal and professional domains. In my case at least, there was a decade “lost” in between the former and the latter. Maybe I should be thinking about making up for lost time.
 

“Does Business Intelligence Require Intelligent Business?” by George M. Tomko

CIO Rant George M Tomko

Introduction

George Tomko’s CIO Rant has been on my list of recommended sites for quite some time. I also follow George on twitter.com (http://twitter.com/gmtomko) and have always found his perspective on business and technology matters to be extremely interesting and informative.

George’s latest blog post is on a subject that is clearly close to my heart and is entitled Does Business Intelligence Require Intelligent Business? I should also thank him for quoting my earlier article, Data – Information – Knowledge – Wisdom, in this. Being mentioned in the same breath as Einstein is always gratifying as well!

George acknowledges that this is something of a “What comes first – the chicken or the egg?” situation. He starts out by building on an article by Gerry Davis at Heidrick & Struggles to state:

  1. collecting [information about customers] is “easy”
  2. analyzing it is hard
  3. disseminating it is very hard

Kudos to the first reader to correctly identify the mountain

Both George and Gerry agreed that the mountains of data that many organisations compile are not always very effectively leveraged to yield information, let alone knowledge or wisdom. Gerry proposes:

identifying and appointing the right executive — someone with superb business acumen combined with a sound technical understanding — and tasking them with delivering real business intelligence

George assesses this approach through the prism of the three points listed above and touches on the ever-present challenges of business silos; agreeing that the type of executive that Gerry recommends appointing could be effective in acting across these. However he introduces a note of caution, suggesting that it may be more difficult than ever to kick off cross-silo initiatives in today’s turbulent times.

I tend to agree with George on this point. Crises may deliver the spark necessary for corporate revolution and unblock previously sclerotic bureaucracies. However, they can equally yield a fortress mentality where views become more entrenched and any form of risk-taking or change is frowned upon. The alternative is incrementalism, but as George points out, this is not likely to lead to a major improvement in the “IQ” of organisations (this is an area that I cover in more detail in Holistic vs Incremental approaches to BI).
 
 
The causality dilemma

Which came first?

Returning to George’s chicken and egg question, do intelligent enterprises build good business intelligence, or does good business intelligence lead to more intelligent enterprises? Any answer here is going to vary according to the organisations involved, their cultures, their appetites for change and the environmental challenges and evolutionary pressures that they face.

Having stated this caveat, my own experience is of an organisation that was smart enough to realise that it needed to take better decisions, but maybe not aware that business intelligence was a way to potentially address this. I spoke about this as one of three scenarios in my recent article, “Why Business Intelligence projects fail”. Part of my role in this organisation (as well as building a BI team from scratch and developing a world-class information architecture) was to act as an evangelist for the benefits of BI.

The work that my team did in collaboration with a wide range of senior business people helped the organisation to whole-heartedly embrace business intelligence as a vehicle for increasing its corporate “IQ”. Rather than having this outcome as a sole objective, this cultural transformation had the significant practical impact of strongly contributing to a major business turn-around from record losses over four years to record profits sustained over six. This is precisely the sort of result that well-designed, well-managed BI that addresses important business questions can (and indeed should) deliver.
 
 
Another sporting analogy

I suppose that it can be argued that only someone with a strong natural aptitude for a sport can become a true athlete. Regardless of their dedication and the amount of training they undertake, the best that lesser mortals can aspire to is plain proficiency. However, an alternative perspective is that it is easy enough to catalogue sportsmen and women who have failed to live up to their boundless potential, where perhaps less able contemporaries have succeeded through application and sheer bloody-minded determination.

I think the same can be said of the prerequisites for BI success and the benefits of successful BI. Organisations with a functioning structure, excellent people at all levels, good channels of communication and a clear sense of purpose are set up better to succeed in BI than their less exemplary competitors (for the same reason that they are set up better to do most things). However, with sufficient will-power (which may initially be centred in a very small group of people, hopefully expanding over time), I think that it is entirely possible for any organisation to improve what it knows about its business and the quality of the decisions it takes.

Good Business Intelligence is not necessarily the preserve of elite organisations – it is within the reach of all organisations who possess the minimum requirements of the vision to aspire to it and the determination to see things through.
 


 
George M. Tomko is CEO and Executive Consultant for Tomko Tek LLC, a company he founded in 2006. With over 30 years of professional experience in technology and business, at the practitioner and executive levels, Mr. Tomko’s goal is to bring game-changing knowledge and experience to client organizations from medium-size businesses to the multidivisional global enterprise.

Mr. Tomko and his networked associates specialize in transformational analysis and decision-making; planning and execution of enterprise-wide initiatives; outsourcing; strategic cost management; service-oriented business process management; virtualization; cloud computing; asset management; and technology investment assessment.

He can be reached at gtomko@tomkotek.com
 

Data – Information – Knowledge – Wisdom

Wisdom

As is probably already apparent to regular readers of this blog, I take rather a visual approach to both understanding things and communicating them. Seldom will I leave a one-on-one meeting without having scrawled on a sheet of paper to explain my train of thought, or to ensure that I have properly understood what someone else has said; equally I tend to be an avid scribbler on flip-charts or wipe-boards during larger gatherings.

I was recently engaged in a debate about whether information was a prerequisite to knowledge; unsurprisingly I felt that it was. The discussion took place on the LinkedIn.com Business Improvement, Change Management & Turnaround group and was actually in response to one of my recent articles, “Why Business Intelligence projects fail”. This led to me thinking about the area further and, inevitably to some googling.

The above path led me to an article on systems-thinking.org entitled Data, Information, Knowledge, and Wisdom, written in 2004 by Gene Bellinger, Durval Castro and Anthony Mills. Returning to the visual theme that I introduced at the start of the article, my eyes were drawn to the following graphic (I have re-drawn this as a larger version was not available on the site, but it remains the work of Messrs Bellinger, Castro and Mills):

© Gene Bellinger, Durval Castro and Anthony Mills – systems-thinking.org

Of course I appreciate that the systems-thinking.org piece is intended to have a broad applicability. However, to me, this schematic pithily captures the fact that Business Intelligence is not just about technology and cannot be effective in isolation. To live and breathe, it needs to be part of a broader framework covering the questions that its users need to answer, the actions that they take based on these answers and the iterative learning that occurs in the process.

Again thinking in terms of pictures, the data to wisdom hierarchy outlined by Bellinger et al brings another image to mind, the one appearing below:

Ascent of Man

In the same way that Natural Selection offers a compelling framework for the phenomenon of Evolution, all-pervasive business intelligence can offer a compelling framework within which an organisation can evolve towards collective wisdom. Of course, in the same way that Evolution does not always imply increased sophistication (just better adaptation to a particular niche), the technological part of business intelligence, in and of itself, does not guarantee an improved organisation. Such an outcome is instead the product of developing an appropriate vision for how the organisation will operate in the future and then working assiduously to get the organisation to embrace this.

I have often spoken about the importance of incorporating BI in an organisation’s DNA. The above analogy brings a different dimension to this metaphor. Both the evolution of species and the evolution of organisations are driven by incremental changes to what makes them tick, but also by occasional great leaps forward; a concept known as punctuated equilibrium in Evolutionary Biology. Introduction of good BI can be such a great leap forward, but hopefully without the connotation of Mao Zedong.

Returning to the original model, Data and Information may have strong technological elements (though the former certainly has more than the latter, see BI implementations are like icebergs), but Knowledge and Wisdom imply a more human angle; even in these days of automated decision-making with the results of analysis fed back into operational systems. This anthropocentric approach, in turn, raises the profile of cultural transformation in business intelligence programmes; something that my experience teaches me is crucial to their success.

These are all themes that I have written about before (e.g. in The confluence of BI and change management), but it is interesting to find a diagram that approaches the area from a different slant.

It is also helpful to learn that I am not alone in thinking that information is one of the major pillars of knowledge!
 

 

Chase Zander Forums – IT Director Report and Change Director Invitation

Following on from my series of posts about the inaugural Chase Zander IT Director Forum that I helped to organise earlier in the year, a report covering the event, which was held in Birmingham, has just been released by Chase Zander themselves.

Anyone interested in learning more about what goes on at these events is welcome to view the document, a PDF version of which may be downloaded here.
 


 
The next Chase Zander event is the Change Director Forum (attendance at which moved me to write the very first article on this blog: Business is from Mars and IT is from Venus). This will be held in London on the evening of 9th July 2009 at the following venue:

Address: St. Clement’s House
27 – 28 Clement’s Lane
London EC4N 7AE
Nearest tubes: Bank or Monument
Map: click here

 
Registration starts at 17:30 and the event itself kicks off at 18:15.

Details of the programme will be published nearer the date.

Attendance is free, but prior registration is required. Please mail Emily White at emily.white@chasezander.com or call her on 0870 997 9014.