The pace of change in the field of database technology seems to be constantly accelerating. No doubt in five years’ time [1], Big Data and the Hadoop suite [2] will seem as old-fashioned as earlier technologies can appear to some people today. There is now a great variety of database technologies in use in different organisations for different purposes. There are also a lot of vendors, some of whom offer more than one type of database product. I think that it is worthwhile considering both the genesis of databases and some of the major developments that have occurred between then and now.
The infographic appearing at the start of this article seeks to provide just such a perspective. It presents an abridged and simplified history of databases from the 1960s to the late 2010s. It is hard to make out the text in the diagram as reproduced above, so I would recommend that readers click on the link provided in order to view a much larger version with bigger and more legible text.
The infographic references a number of terms. Below I provide links to definitions of several of these, which are taken from The Data and Analytics Dictionary. The list progresses from the top of the diagram downwards, but starts with a definition of “database” itself:
To my mind, it is interesting to see just how long we have been grappling with the best way to set up databases. Also of note is that some of the Big Data technologies are actually relatively venerable, dating to the mid-to-late 2000s (some elements are even older, consisting of techniques for handling flat files on UNIX or Mainframe computers back in the day).
I hope that both the infographic and the definitions provided above contribute to the understanding of the history of databases and also that they help to elucidate the different types of database that are available to organisations today.
Acknowledgements
The following people’s input is acknowledged on the document itself, but my thanks are also repeated here:
Neil Raden (@NeilRaden) of Hired Brains Research both reviewed the infographic and made significant contributions to its contents.
Of course any errors and omissions remain the responsibility of the author.
The title of the discussion thread posted was “Business Intelligence vs. Business Analytics: What’s the Difference?” and the original poster was Jon Dohner from Information Builders. To me the thread topic is something of an old chestnut and takes me back to the heady days of early 2009. Back then, Big Data was already rather more than just a twinkle in Doug Cutting and Mike Cafarella’s eyes, but it had yet to rise to its current level of media ubiquity.
Nostalgia is not going to be enough for me to start quoting from my various articles of the time[2] and neither am I going to comment on the pros and cons of Information Builders’ toolset. Instead I am more interested in a different turn that discussions took based on some comments posted by Peter Birksmith of Insurance Australia Group.
Peter talked about two streams of work being carried out on the same source data. These are Business Intelligence (BI) and Information Analytics (IA). I’ll let Peter explain more himself:
BI only produces reports based on data sources that have been transformed to the requirements of the Business and loaded into a presentation layer. These reports present KPIs and Business Metrics as well as paper-centric layouts for consumption. Analysis is done via Cubes and DQ, although this analysis is being replaced by IA.
[…]
IA does not produce a traditional report in the BI sense; rather, the reporting is on Trends and predictions based on raw data from the source. The idea in IA is to acquire all data in its raw form and then analyse this data to build foundation KPIs and Metrics, which are not the actual Business Metrics (if that makes sense). This information is then passed back to BI to transform and generate the KPI Business report.
I was interested in the dual streams that Peter referred to and, given that I have some experience of insurance organisations and how they work, penned the following reply[3]:
Hi Peter,
I think you are suggesting an organisational and technology framework where the source data bifurcates and goes through two parallel processes and two different “departments”. On one side, there is a more traditional, structured, controlled and rules-based transformation; probably as the result of collaborative efforts of a number of people, maybe majoring on the technical side – let’s call it ETL World. On the other a more fluid, analytical (in the original sense – the adjective is much misused) and less controlled (NB I’m not necessarily using this term pejoratively) transformation; probably with greater emphasis on the skills and insights of individuals (though probably as part of a team) who have specific business knowledge and who are familiar with statistical techniques pertinent to the domain – let’s call this ~ETL World, just to be clear :-).
You seem to be talking about the two of these streams constructively interfering with each other (I have been thinking about X-ray Crystallography recently). So insights and transformations (maybe down to either pseudo-code or even code) from ~ETL World influence and may be adopted wholesale by ETL World.
I would equally assume that, if ETL World‘s denizens are any good at their job, structures, datasets and master data which they create (perhaps early in the process before things get multidimensional) may make work more productive for the ~ETLers. So it should be a collaborative exercise with both groups focused on the same goal of adding value to the organisation.
If I have this right (an assumption I realise) then it all seems very familiar. Given we both have Insurance experience, this sounds like how a good information-focused IT team would interact with Actuarial or Exposure teams. When I have built successful information architectures in insurance, in parallel with delivering robust, reconciled, easy-to-use information to staff in all departments and all levels, I have also created, maintained and extended databases for the use of these more statistically-focused staff (the ~ETLers).
These databases, which tend to be based on raw data, have become more useful as structures from the main IT stream (ETL World) have been applied to these detailed repositories. This might include joining key tables so that analysts don’t have to repeat this themselves every time, doing some basic data cleansing, or standardising business entities so that different data can be more easily combined. You are of course right that insights from ~ETL World often influence the direction of ETL World as well. Indeed, often such insights will need to move to ETL World (and be produced regularly and in a manner consistent with existing information) before they are deployed to the wider field.
It is sort of like a research team and a development team, but where both “sides” do research and both do development, but in complementary areas (reminiscent of a pair of entangled electrons in a singlet state, each of whose spin is both up and down until they resolve into one up and one down in specific circumstances – sorry again I did say “no more science analogies”). Of course, once more, this only works if there is good collaboration and both ETLers and ~ETLers are focussed on the same corporate objectives.
So I suppose I’m saying that I don’t think – at least in Insurance – that this is a new trend. I can recall working this way as far back as 2000. However, what you describe is not a bad way to work, assuming that the collaboration that I mention is how the teams work.
I am aware that I must have said “collaboration” 20 times – your earlier reference to “silos” does however point to a potential flaw in such arrangements.
I think that the perspective of actuaries having been data scientists long before the latter term emerged is a sound one.
Although the genesis of this thread dates to over five years ago (an aeon in terms of information technology), I think that – in the current world where some aspects of the old divide between technically savvy users[4] and IT staff with strong business knowledge[5] have begun to disappear – there is both an opportunity for businesses and a threat. If silos develop and the skills of a range of different people are not combined effectively, then we have a situation where:
| ETL World | + | ~ETL World | < | ETL World ∪ ~ETL World |
If instead collaboration, transparency and teamwork govern interactions between different sets of people then the equation flips to become:
| ETL World | + | ~ETL World | ≥ | ETL World ∪ ~ETL World |
Perhaps the way that Actuarial and IT departments work together in enlightened insurance companies points the way to a general solution for the organisational dynamics of modern information provision. Maybe also the, by now somewhat venerable, concept of a Business Intelligence Competency Centre, a unified team combining the best and brightest from many fields, is an idea whose time has come.
Notes
[1]
A link to the actual discussion thread is provided here. However, you need to be a member of the TDWI Group to view it.
[2]
Anyone interested in ancient history is welcome to take a look at the following articles from a few years back:
“So how come Business Intelligence didn’t predict the World Economic Crisis?”
I have seen countless variants of the above question posted all over the Internet. Mostly it is posed on community forums and can often be a case of someone playing Devil’s Advocate, or simply wanting to stir up a conversation. However, I came across a reference to this question recently in the supposedly more sober columns of The British Computer Society (now very modishly rebranded as BCS – The Chartered Institute for IT). According to the font of all human knowledge, the BCS is:
“a professional body and a learned society that represents those working in Information Technology. Established in 1957, it is the largest United Kingdom-based professional body for computing”
The specific article was entitled Data quality issues ‘to blame for financial crises’ (I’m not sure whether the BCS is saying that data quality issues are responsible for more than one financial crisis, or whether there is a typo in the last word). The use of quotation marks is also apt, as the BCS seem to be reliant for the content of this article on both the opinions of the owner of an on-line community and a piece of commercial research finding that:
“more than 75 per cent of top financial services firms are to increase the amount of money they allocate to combating data quality and consistency issues”
and
“a further 44 per cent said clarity of data would be their ‘key focus’”
How this adds up to the conclusion appearing in the title is perhaps something of a mystery. The process is not exactly a shining example of how to turn source data into actionable information.
Lessons from Lehmans
It is arguable (though maybe not on the evidence presented in the BCS article) that poor data quality may have contributed to the demise of, say, Lehman Brothers. However, the following line of argument is a bit of a reach:
Poor data quality [arguably] contributed to the failure of Lehman Brothers
Lehman Brothers’ failure was a trigger for a broader collapse of the world economy
Therefore Lehman’s collapse was solely to blame for the crisis
Thus (as per the BCS): Data quality issues [are] ‘to blame for financial crises’ [sic.]
There are a number of problems with this logic. To address just one, the failure of Lehmans did not cause the recession, it precipitated problems that were much larger, had been building up for years and which would have been triggered by something sooner or later (all balloons either deflate or pop eventually, even if not pierced by a needle).
By way of analogy, thinking that the assassination of Archduke Ferdinand was the sole reason for the outbreak of The Great War would be an over-simplification of history; greater forces were at work. Does a dropped match [proximate cause] lead to a massive forest fire, or are the preceding months of drought [distal cause] more to blame, with the fire an accident waiting to happen?
To most observers the distal causes of the recession were separate bubbles that had built up in a variety of asset classes (e.g. residential property) that were either going to deflate slowly, or go bang! Leverage created by certain classes of financial instruments made a bang more likely, but these instruments themselves did not create the initial problems either.
Extending our earlier analogy, if the asset bubbles were a lack of rain, then maybe the use of financial instruments – such as collateralised debt obligations – was a drying wind. In this scenario, Lehman Brothers was the dropped match, nothing more. If it wasn’t them, it would have been another event. So for causes of the World Economic crisis, we need to look more broadly.
Cui culpa?
Before I explore whether BI should have performed better in predicting the most severe recession since the 1930s, it is perhaps worth asking a more pertinent question, namely, “so how come macroeconomics didn’t predict the World Economic Crisis?” Again according to the font:
macroeconomics is a branch of economics that deals with the performance, structure, behavior and decision-making of the entire economy, be that a national, regional, or the global economy
so surely it should have had something to say in advance about this subject. However at least according to The Economist (who one would assume should know something about the area):
[Certain leading economists] argue that [other] economists missed the origins of the crisis; failed to appreciate its worst symptoms; and cannot now agree about the cure. In other words, economists misread the economy on the way up, misread it on the way down and now mistake the right way out.
On the way up, macroeconomists were not wholly complacent. Many of them thought the housing bubble would pop or the dollar would fall. But they did not expect the financial system to break. Even after the seizure in interbank markets in August 2007, macroeconomists misread the danger. Most were quite sanguine about the prospect of Lehman Brothers going bust in September 2008.
[Note: a subscription to the magazine is required to view this article]
In a later article in the same journal, Robert Lucas, Professor of Economics at the University of Chicago, rebutted the above critique, stating:
One thing we are not going to have, now or ever, is a set of models that forecasts sudden falls in the value of financial assets, like the declines that followed the failure of Lehman Brothers in September. This is nothing new. It has been known for more than 40 years and is one of the main implications of Eugene Fama’s “efficient-market hypothesis”, which states that the price of a financial asset reflects all relevant, generally available information. If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier.
[Note: a subscription to the magazine is required to view this article]
So if economists had at best a mixed track record in predicting the crisis (and can’t seem to agree amongst themselves about the merits of different ways of analysing economies), then it seems to me that Business Intelligence has its work cut out for it. As I put it in an earlier article, The scope of IT’s responsibility when businesses go bad:
My general take is that if the people who were committing organisations to collateralised debt obligations and other even more esoteric asset-backed securities were unable (or unwilling) to understand precisely the nature of the exposure that they were taking on, then how could this be reflected in BI systems? Good BI systems reflect business realities and risk is one of those realities. However, if risk is as ill-understood as it appears to have been in many financial organisations, then it is difficult to see how BI (or indeed its sister area of business analytics) could have shed light where the layers of cobwebs were so dense.
As an aside, the above-referenced article argues that IT professionals should not try to distance themselves too much from business problems. My basic thesis being that if IT is shy about taking any responsibility in bad times, it should not be surprised when its contributions are under-valued in good ones. However this way lies a more philosophical discussion.
My opinion on why questions about whether or not business intelligence predicted the recession continue to be asked is that they relate to BI being oversold. Oversold in a way that I believe is unhealthy and actually discredits the many benefits of the field.
Crystal Ball Gazing
The above slide is taken from my current deck. My challenge to the audience is to pick the odd-one-out from the list. Assuming that you buy into my Rubik’s Cube analogy for business intelligence, hopefully this is not an overly onerous task.
Business Intelligence is not a crystal ball, and Predictive Analytics is not a crystal ball either. They are extremely useful tools – indeed I have argued many times before that BI projects can have the largest payback of any IT project – but they are not panaceas.
An inflation prediction from the Bank of England, illustrating the fairly obvious fact that uncertainty increases the further into the future you look.
Business Intelligence will never warn you of every eventuality – if something is wholly unexpected, how can you design tools to predict it? Statistical models will never give you precise answers to what will happen in the future – a range of outcomes, together with the probabilities associated with each, is the best you can hope for (see above). Predictive Analytics will not make you prescient; instead it can provide you with useful guidance, so long as you remember it is a prediction, not fact.
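The widening bands of a fan chart like the Bank of England’s can be mimicked with a few lines of simulation. The sketch below is purely illustrative (the random-walk model, volatility figure and all numbers are my own assumptions, not anything any central bank uses): it shows that the central 90% interval of simulated outcomes grows as the forecast horizon lengthens.

```python
import random

def forecast_fan(start, horizons, n_sims=10_000, vol=0.02, seed=42):
    """Simulate simple random walks and return a central 90% interval
    of outcomes for each forecast horizon (measured in periods)."""
    rng = random.Random(seed)
    fan = {}
    for h in horizons:
        outcomes = []
        for _ in range(n_sims):
            value = start
            for _ in range(h):
                value *= 1 + rng.gauss(0, vol)  # one period's random shock
            outcomes.append(value)
        outcomes.sort()
        # the 5th and 95th percentiles bound the central 90% of outcomes
        fan[h] = (outcomes[int(0.05 * n_sims)], outcomes[int(0.95 * n_sims)])
    return fan

for h, (lo, hi) in forecast_fan(100.0, [1, 4, 12]).items():
    print(f"{h:>2} periods ahead: 90% of outcomes lie in [{lo:.1f}, {hi:.1f}]")
```

The interval widens with the horizon (roughly with the square root of time for a random walk); nothing in the model pins down a single future value, which is exactly the point about ranges rather than certainties.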
However, in most circumstances, the fact that your Swiss Army knife doesn’t have the highly desirable “tool for removing stones from horses’ hooves” does not preclude it from fulfilling its more quotidian functions well. The fact that your car can’t do 0–60 mph (0–97 kph, or 0–27 m/s if you insist) in less than 4 seconds does not mean that it is incapable of getting you around town perfectly happily. Tools should be fit-for-purpose, not all-purpose.
Unfortunately, sometimes business intelligence can be presented as capable of achieving the impossible; this is only going to lead to disillusionment with the area and to its real benefits not being seized. Also it is increasingly common for vendors and consultancies to claim that amazing results can be obtained with BI quickly, effortlessly and (most intoxicatingly) with minimum corporate pain. My view is that these claims are essentially bogus. Like most things in life, what you get out of business intelligence is highly connected with what you put in.
If you want some pretty pictures showing some easy-to-derive figures, then progress in days rather than months is entirely feasible. But if you want useful insights into your organisation’s performance that can lead to informed decision-making, then time is required to work out what makes the company tick, how best to measure this to drive action and – a part that is often missed – to provide the necessary business and technical training to allow users to get the best out of the tools. Here my experience is that there are few meaningful short-cuts.
Crystallising BI benefits
Adopting a more positive tone, I believe that business intelligence, done well, can do a lot of great things for organisations. A brief selection of these includes:
Dissect corporate performance in ways that enable underlying drivers to be made more plain (our drop-off in profitability is due to pricing pressures in Subsidiary A and poor retention of mid-sized accounts in Territory B, compounded by a fall in the rate of new business acquisition in Industry Segment C).
Amalgamate data from disparate sources, allowing connections to be made between different, but related, areas (high turnover of staff in our customer services centre has coincided with both increased lead times for shipments and greater incidence of customer complaints)
Give insights as to how customers are behaving and how they react to corporate initiatives (our smaller customers appear to be favouring bundled services, which include Feature W, however there was increased uptake of unbundled Service Z following on from our recently published video extolling its virtues)
Measure the efficacy of business initiatives (was the launch of Product X successful? did our drive to improve service levels lead to better business retention?)
Transparently monitor business unit achievement (Countries P, Q and R are all meeting their sales and profitability targets, however Country Q is achieving this with 2 fewer staff per $1m revenue)
Provide indications (not guarantees) of future trends (sales of Service K are down 10% on this time last year and fell on a seasonally-adjusted basis for four of the last six months)
Isolate hard-to-find relations (the biggest correlation with repeat business is the speed with which reported problems are addressed, not the number of problems that occur)
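The last of these, surfacing non-obvious correlations, is straightforward to sketch in a few lines of analysis. All of the figures below are invented purely for illustration (the original example quotes no data): we check whether a toy repeat-business rate tracks problem-resolution speed more closely than raw problem counts.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-account figures: average days to resolve a reported
# problem, number of problems reported, and repeat-business rate (%).
resolution_days = [1, 2, 2, 3, 5, 7, 8, 10]
problem_count   = [5, 1, 6, 2, 4, 3, 6, 2]
repeat_rate     = [92, 90, 88, 85, 70, 62, 55, 48]

r_speed = pearson(resolution_days, repeat_rate)
r_count = pearson(problem_count, repeat_rate)
print(f"repeat business vs resolution time: r = {r_speed:.2f}")
print(f"repeat business vs problem count:   r = {r_count:.2f}")
```

In this made-up data the strong (negative) relationship is with how quickly problems are resolved, not how many occur: precisely the kind of relation that only emerges once disparate data is brought together.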
It is worth pointing out that a lot of the above is internally focussed, about the organisation itself and only tangentially related to the external environment in which it is operating. Some companies are successfully blending their internal BI with external market information, either derived from specialist companies, or sometimes from industry associations. However few companies are incorporating macroeconomic trends into their BI systems. Maybe that’s because of the confusion endemic in Economics that was referenced above.
However, there is another reason why BI is not really in the business of predicting overall economic trends. In the preceding paragraphs, I have stressed that it takes a lot of effort to get BI working well for a company. To have the same degree of benefit for a nation’s economy, you would have to aggregate across thousands of companies and deal with the same sort of inconsistency in data definitions and calculation methodologies that are hard enough to fight within a single organisation, but orders of magnitude worse.
Nationwide (let alone global) BI would be a Herculean (and essentially impossible) task. Instead simplifying assumptions have to be made, and such assumptions do not generally lead to high-quality BI implementations, which are typically highly tuned to the characteristics of individual organisations.
Leverage
There are of course organisations whose profitability depends to an exceptional degree on broad economic trends. These include the much-maligned banks of varying flavours. The unique problem that many of these face is leverage. While a 1% fall in economic activity might have a 1% impact on the revenues of a manufacturing company (in fact the relationship is seldom so simple), it might have a catastrophic impact on a bank, depending on how its portfolio is structured.
Consider the simplest form of option, one which pays out the difference between the market price and a floor of £50. If conditions in the economy drive the share price from £55 to £50, the regular shareholder has lost 9% of their investment; the option holder has lost 100%. So while the shareholder and the option-holder have an equal chance of experiencing such a price fall, the impact on them will be radically different. Like BI, derivatives are a very useful tool, however they also need to be used appropriately.
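The arithmetic behind that leverage takes only a few lines to verify. The prices are the hypothetical ones above, and the "option" is deliberately simplified to just the excess of the share price over a £50 floor:

```python
def pct_loss(before, after):
    """Percentage loss when a position's value moves from `before` to `after`."""
    return 100 * (before - after) / before

def option_value(share_price, floor=50.0):
    """Simplified option: worth the excess of the share price over a floor."""
    return max(share_price - floor, 0.0)

share_before, share_after = 55.0, 50.0

shareholder_loss = pct_loss(share_before, share_after)
option_loss = pct_loss(option_value(share_before), option_value(share_after))

print(f"Shareholder loss:   {shareholder_loss:.0f}%")  # ~9%
print(f"Option-holder loss: {option_loss:.0f}%")       # 100%
```

The same £5 move wipes out the option entirely while barely denting the share: that gearing, multiplied across a portfolio, is why a small economic shock can be catastrophic for a leveraged institution.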
Closing thoughts
You will notice an absence of fortune-telling from the above list of BI benefits. As indispensable as I believe good BI is to organisations of all shapes and sizes, if fortune-telling is your desire then my advice is to forswear BI and wait until this lady is next in town…
This week, by way of variation, I present an article on TechRepublic that has led to heated debate in the LinkedIn.com Organizational Change Practitioners group. Today’s featured article is by one of my favourite bloggers, Ilya Bogorad, and is entitled Lessons in Leadership: How to instigate and manage change.
The importance of change management in business intelligence projects and both IT and non-IT projects in general is of course a particular hobby-horse of mine and a subject I have written on extensively (a list of some of my more substantial change-related articles can be viewed here). I have been enormously encouraged by the number of influential IT bloggers who have made this very same connection in the last few months. Two examples are Maureen Clarry writing about BI and change on BeyeNetwork recently (my article about her piece can be read here) and Neil Raden (again on BeyeNetwork) who states:
[…] technology is never a solution to social problems, and interactions between human beings are inherently social. This is why performance management is a very complex discipline, not just the implementation of dashboard or scorecard technology. Luckily, the business community seems to be plugged into this concept in a way they never were in the old context of business intelligence. In this new context, organizations understand that measurement tools only imply remediation and that business intelligence is most often applied merely to inform people, not to catalyze change. In practice, such undertakings almost always lack a change management methodology or portfolio.
You can both read my reflections on Neil’s article and link to it here.
Ilya’s piece is about change in general, but clearly he brings both an IT and business sensibility to his writing. He identifies five main areas to consider:
Do change for a good reason
Set clear goals
Establish responsibilities
Use the right leverage
Measure and adjust
There are enormous volumes of literature about change management available, some academic, some based on practical experience, the best combining elements of both. However it is sometimes useful to distil things down to some easily digestible and memorable elements. In his article, Ilya is effectively playing the role of a University professor teaching a first year class. Of course he pitches his messages at a level appropriate for the audience, but (as may be gauged from his other writings) Ilya’s insights are clearly based on a more substantial foundation of personal knowledge.
When I posted a link to Ilya’s article on the LinkedIn.com Organizational Change Practitioners group, it certainly elicited a large number of interesting responses (74 at the time of publishing this article). These came from a wide range of change professionals who are members. It would not be an overstatement to say that debate became somewhat heated at times. Ilya himself also made an appearance later on in the discussions.
Some of the opinions expressed on this discussion thread are well-aligned with my own experiences of successfully driving change; others were very much at variance with them. What is beyond doubt are two things: more and more people are paying very close attention to change management and realising the pivotal role it has to play in business projects; and there is a rapidly growing body of theory about the subject (some of it informed by practical experience) which will hopefully mature to the degree that parts of it can be useful to a broader audience of change practitioners grappling with real business problems.
I have featured Neil Raden’s thoughts quite a few times on this blog. It is always valuable to learn from the perspectives and insights of people like Neil who have been in the industry a long time and for whom there is little new under the sun.
In his latest post, IBM System S: Not for Everyone (which appears on his Intelligent Enterprise blog), Neil raises concerns about some commentators’ expectations of this technology. If business intelligence is seen as having democratised information, then some people appear to feel that System S will do the same for real-time analysis of massive data sets.
While intrigued by the technology and particular opportunities that System S may open up, Neil is sceptical about some of the more eye-catching claims. One of these, quoted in The New York Times, relates to real-time analysis in a hospital context, with IBM’s wizardry potentially alerting medical staff to problems before they get out of hand and maybe even playing a role in diagnosis. On the prospects for this universal panacea becoming reality, Neil adroitly observes:
How many organizations have both the skill and organizational alignment to implement something so complex and controversial?
Neil says that he is less fond of sporting analogies than many bloggers (having recently posted articles relating to cricket, football [soccer], mountain biking and rock climbing, I find myself blushing somewhat at this point), but nevertheless goes on to make a very apposite comparison between professional sportsmen and women and those carrying out real-time analysis professionally. Everyday sports fans can appreciate the skill, commitment and talent of the professionals, but these people operate on a different plane from mere mortals. With System S, Neil suggests that:
The vendor projects the image of Tiger Woods to a bunch of duffers.
I think once again we arrive at the verity that there is no silver bullet in any element of information generation (see my earlier article, Automating the business intelligence process?). Many aspects of the technology used in business intelligence are improving every year and I am sure that there are many wonderful aspects to System S. However, this doubting Thomas is as sceptical as Neil about certain of the suggested benefits of this technology. Hopefully some concrete and useful examples of its benefits will soon replace the current hype and provide bloggers with some more tangible fare to write about.
Neil Raden is founder of Hired Brains, a consulting firm specializing in analytics, business intelligence and decision management. He is also the co-author of the book Smart (Enough) Systems.
Industry luminary Neil Raden, founder of Hired Brains, has weighed into the ongoing debate about Business Analytics vs Business Intelligence on his Intelligent Enterprise blog. The discussions were spawned by comments made by Jim Davis, Chief Marketing Officer of SAS, at the recent SAS Global Forum. Neil was in the audience when Jim spoke and both his initial reaction and considered thoughts are worth reading.
Neil Raden is an “industry influencer” – followed by technology providers, consultants and even industry analysts. His skill at devising information assets and decision services from mountains of data is the result of thirty years of intensive work. He is the founder of Hired Brains, a provider of consulting and implementation services in business intelligence and analytics to many Global 2000 companies. He began his career as a casualty actuary with AIG in New York before moving into predictive modeling services, software engineering and consulting, with experience in delivering environments for decision making in fields as diverse as health care to nuclear waste management to cosmetics marketing and many others in between. He is the co-author of the book Smart (Enough) Systems and is widely published in magazines and online media. He can be reached at nraden@hiredbrains.com.
I have to say that BeyeNETWORK is becoming the go-to place for intelligent BI insights.
In this recent article, Neil Raden challenges the received wisdom that, if you can measure something, managing it follows as a natural corollary. This is a problem that I have seen in a number of BI implementations. It can be characterised as the Field of Dreams problem: “if we build it, they will come!”
One way to better align BI provision with the management of an organisation is to make sure that any BI element that you deploy is targeted at answering a specific business question. It is important that answering the question leads to action.
If the reaction to learning that sales in the Philadelphia office are down by 2% is a shrug, then not a lot has been achieved. If instead it is easy to analyse the drivers behind this further (e.g. which part of the sales funnel is suffering from a blockage? Is this a temporary blip, or a trend? Is the phenomenon centred on a specific product, or across the board?), then we begin to embed the use of information to drive decision-making in the organisation. If this leads to an informed telephone conversation with the Philly branch manager and the creation of an action plan to address the fall-off in sales, then BI is starting to add value. This gets us into the area of Actionable Information that Sarah Burnett writes about.
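A toy version of that drill-down might look like the following sketch (the office, products and figures are all invented): the headline 2% drop only becomes actionable once it can be decomposed into the movements underneath it.

```python
# Hypothetical sales detail for one office across two comparable periods.
sales = [
    # (product, period, amount)
    ("Widgets",  "prior",   500), ("Widgets",  "current", 495),
    ("Gadgets",  "prior",   300), ("Gadgets",  "current", 240),
    ("Services", "prior",   200), ("Services", "current", 245),
]

def totals_by_product():
    """Sum amounts per product and period, e.g. {"Widgets": {"prior": 500, ...}}."""
    out = {}
    for product, period, amount in sales:
        out.setdefault(product, {}).setdefault(period, 0)
        out[product][period] += amount
    return out

headline_prior = sum(a for _, p, a in sales if p == "prior")
headline_now = sum(a for _, p, a in sales if p == "current")
print(f"Office total: {100 * (headline_now / headline_prior - 1):+.1f}%")

for product, t in totals_by_product().items():
    change = 100 * (t["current"] / t["prior"] - 1)
    print(f"  {product:<8} {change:+.1f}%")
```

In this made-up data the office-level figure is -2.0%, but the drill-down shows it is a 20% fall in one product line masked by growth elsewhere; that is the difference between a shrug and a phone call with a plan.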
This is one reason why it is important that business intelligence is considered within a framework of cultural transformation; one of the main themes of this blog.