Patterns patterns everywhere – The Sequel

Back in 2010 I posted a piece called Patterns patterns everywhere, which used as its entry point various articles on a number of web-sites relating to the then-current Eyjafjallajokull eruption. I went on to reference – amongst other phenomena – the weather.

The incomparable Randall Munroe from xkcd.com has just knocked my earlier work into a cocked hat with his (perhaps unsurprisingly) much more laconic observations from last Friday, which are instead inspired by the recent cold snaps in the US:

You see the same pattern all over. Take Detroit--' 'Hold on. Why do you know all these statistics offhand?' 'Oh, um, no idea. I definitely spend my evenings hanging out with friends, and not curating a REALLY NEAT database of temperature statistics. Because, pshh, who would want to do that, right? Also, snowfall records.
Copyright xkcd.com
This image has been rearranged to fit into the confines of peterjamesthomas.com

 

 

Facebook squares “puzzle”

This blog primarily deals with matters relating to business, technology and change; obviously with a major focus on how information provision overlaps with each of these. However there is the occasional divertimento relating to mathematics, physical science, or that most recent of -ologies, social media.

The following article could claim some connections with both mathematics and social media, but in truth relates to neither. Its focus is instead on irritation, specifically a Facebook meme that displays the death-defying resilience of a horror movie baddie. My particular bête noire relates to the following diagram, which appears on my feed more frequently than adverts for “Facebook singles”:

24 or 25?

It is generally accompanied by some inane text, the following being just one example:

I got into a heated battle with a friend over this… I got 24 she say’s 25. How many squares do you see?

Nice grocer’s apostrophe BTW!

I realise that the objective is probably to encourage people to point out the error of the original poster’s ways, thereby racking up comments. However 24?, 25??, really???, really, really????

Let’s break it down…

24 or 25?

Well there is clearly one big square (a 4×4 one) staring us in the face as shown above. Let’s move on to a marginally less obvious class of squares and work these through in long-hand. The squares in this class are all 3×3 and there are 4 of them as follows:

24 or 25?

1…

24 or 25?

2…

24 or 25?

3…

24 or 25?

4…

Adding the initial 4×4 square, our running total is now 5.

The next class is smaller again, 2×2 squares. The same approach as above works; not all the class members are shown, but readers can hopefully fill in the blanks themselves.

24 or 25?

1…

24 or 25?

2…

Skip a few…

24 or 25?

9…

Adding our previous figure of 5 means our running total is now 14; we are approaching 24 and 25 fast – which one is it going to be?

The next class is the most obvious, the set of larger 1×1 squares.

24 or 25?

It doesn’t require a genius to note that there are 16 of these. Oh dear, the mid-twenties estimates are not looking so good now.

24 or 25?

Also we shouldn’t forget the two further squares of the same size (each of which is split into smaller ones), one of which is shown in the diagram above.

Our previous total was 14 and now 14 + 16 + 2 = 32.

Finally there is the second set of 1×1 squares, the smaller ones.

24 or 25?

It’s trivial to see that there are 8 of these.

Adding this to the last figure of 32 we get a grand total of 40, slightly above both 24 and 25.

Perhaps the only thing of any note that this rather simple exercise teaches us is the relation to sums of squares, inasmuch as part of the final figure is given by: 1 + 4 + 9 + 16, or 1² + 2² + 3² + 4² = 30. Even this is rather spoiled by introducing the intersecting (and interloping) two squares that are covered last in the above analysis.
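For anyone who wants to check the tally mechanically, here is a minimal Python sketch (mine, not part of the original puzzle); the first part is the standard “squares in an n×n grid” count and the two interloping squares, together with their eight small squares, are simply added on:

```python
# A minimal sketch that reproduces the tally above: the standard count of
# axis-aligned squares in an n x n grid, plus the two overlaid squares and the
# eight small squares inside them.

def squares_in_grid(n):
    """Count all axis-aligned squares in an n x n grid of unit cells."""
    # k x k squares can be positioned in (n - k + 1) ** 2 ways
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

grid_total = squares_in_grid(4)            # 1 + 4 + 9 + 16 = 30
extras = 2 + 8                             # the two interlopers and their small squares
print(grid_total, grid_total + extras)     # 30 40
```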

Oh well, at least now I never have to comment on this annoying “puzzle” again, which is something.
 

You have to love Google

…well if you used to be a Number Theorist that is.

Google / Fermat

It’s almost enough to make me forgive them for Gmail’s “consider including” feature. Almost!
 

 

Words fail me

Unpredictability defined

No, not a post about England’s rise to be the number one Test Cricket team in the world – that is to come. Instead this very brief article refers to a piece on the BBC that, in turn, cites a paper in Geology entitled A 7000 yr perspective on volcanic ash clouds affecting northern Europe (you will need a subscription, or to belong to an institution that has one, to read the full text, but the abstract is freely available).

The BBC’s own take on this is summed up in the title of their bulletin, Another giant UK ash cloud ‘unlikely’ in our lifetimes. My fervent hope is that this is lazy, or ill-informed, journalism rather than a true representation of what is in the peer-reviewed journal (perhaps all the main BBC journalists are on holiday and the interns are writing the copy). To state the obvious, in general, the fact that something happens on average every 56 years does not guarantee that successive events are always 56 years apart.
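To labour the point with a toy example (entirely my own construction, and nothing to do with the paper’s actual methodology): if such ash clouds arrived according to a Poisson process with a mean interval of 56 years, the average gap would indeed be 56 years, yet individual gaps would range from months to centuries.

```python
# An illustrative sketch only: model event intervals as a Poisson process with
# a mean gap of 56 years and look at how irregular the individual gaps are.
# The Poisson assumption is purely to illustrate "average interval" versus
# "regular interval"; it is not a claim about volcanic behaviour.
import random

random.seed(42)
mean_gap = 56.0
gaps = [random.expovariate(1.0 / mean_gap) for _ in range(10_000)]

print(f"mean gap     : {sum(gaps) / len(gaps):6.1f} years")
print(f"shortest gap : {min(gaps):6.1f} years")
print(f"longest gap  : {max(gaps):6.1f} years")
print(f"gaps < 10 yrs: {sum(g < 10 for g in gaps) / len(gaps):6.1%}")
```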

For a more cogent review of predicting volcanic eruptions, see my earlier post, Patterns patterns everywhere.
 

 

Analogies

Disaster Area's chief research accountant has recently been appointed Professor of Neomathematics at the University of Maximegalon, in recognition of both his General and his Special Theories of Disaster Area Tax Returns, in which he proves that the whole fabric of the space-time continuum is not merely curved, it is in fact totally bent.

Note: In the following I have used the abridgement Maths when referring to Mathematics. I appreciate that this may be jarring to US readers; omitting the ‘s’ is jarring to me, so please accept my apologies in advance.

Introduction

Regular readers of this blog will be aware of my penchant for analogies. Dominant amongst these have been sporting ones, which have formed a major part of articles such as:

Rock climbing: Perseverance
A bad workman blames his [BI] tools
Running before you can walk
Feasibility studies continued…
Incremental Progress and Rock Climbing
Cricket: Accuracy
The Big Picture
Mountain Biking: Mountain Biking and Systems Integration
Football (Soccer): “Big vs. Small BI” by Ann All at IT Business Edge

I have also used other types of analogy from time to time, notably scientific ones such as in the middle sections of Recipes for Success?, or A Single Version of the Truth? – I was clearly feeling quizzical when I wrote both of those pieces! Sometimes these analogies have been buried in illustrations rather than the text as in:

Synthesis (illustration: RNA Polymerase transcribing DNA to produce RNA in the first step of protein synthesis)
The Business Intelligence / Data Quality symbiosis (illustration: a mitochondrion, the possible product of endosymbiosis of proteobacteria and eukaryotes)
New Adventures in Wi-Fi – Track 2: Twitter (illustration: Paul Dirac, the greatest British Physicist since Newton)

On other occasions I have posted overtly Mathematical articles such as Patterns, patterns everywhere, The triangle paradox and the final segment of my recently posted trilogy Using historical data to justify BI investments.

Jim Harris' OCDQ Blog

Jim Harris (@ocdqblog) frequently employs analogies on his excellent Obsessive Compulsive Data Quality blog. If there is a way to form a title “The X of Data Quality”, and relate this in a meaningful way back to his area of expertise, Jim’s creative brain will find it. So it is encouraging to feel that I am not alone in adopting this approach. Indeed I see analogies employed increasingly frequently in business and technology blogs, to say nothing of in day-to-day business life.

However, recently two things have given me pause for thought. The first was the edition of Randall Munroe’s highly addictive webcomic, xkcd.com, that appeared on 6th May 2011, entitled “Teaching Physics”. The second was a blog article I read which likened a highly abstract research topic in one branch of Theoretical Physics to what BI practitioners do in their day job.

An homage to xkcd.com

Let’s consider xkcd.com first. Anyone who finds some nuggets of interest in the type of – generally rather oblique – references to matters Mathematical or Scientific that I mention above is likely to fall in love with xkcd.com. Indeed anyone who did a numerate degree, works in a technical role, or is simply interested in Mathematics, Science or Engineering would as well – as Randall says in a footnote:

“this comic occasionally contains […] advanced mathematics (which may be unsuitable for liberal-arts majors)”

Although Randall’s main aim is to entertain – something he manages to excel at – his posts can also be thought-provoking, bitter-sweet and even resonate with quite profound experiences and emotions. Who would have thought that some stick figures could achieve all that? It is perhaps indicative of the range of topics dealt with on xkcd.com that I have used it to illustrate no fewer than seven of my articles (including this one; a full list appears at the end of the article). It is encouraging that Randall’s team of corporate lawyers has generally viewed my requests to republish his work favourably.

The example of Randall’s work that I wanted to focus on is as follows.

Space-time is like some simple and familiar system which is both intuitively understandable and precisely analogous, and if I were Richard Feynman I’d be able to come up with it.
© xkcd.com (adapted from the original to fit the dimensions of this page)

It is worth noting that often the funniest / most challenging xkcd.com observations appear in the mouse-over text of comic strips (alt or title text for any HTML heads out there – assuming that there are any of us left). I’ll reproduce this below as it is pertinent to the discussion:

Space-time is like some simple and familiar system which is both intuitively understandable and precisely analogous, and if I were Richard Feynman I’d be able to come up with it.

If anyone needs some background on the science referred to, then have a skim of this article; if you need some background on the scientist mentioned (who has also made an appearance on peterjamesthomas.com in Presenting in Public), then glance through this second one.

Here comes the Science…

Randall points out the dangers of over-extending an analogy. While it has always helped me to employ the rubber-sheet analogy of warped space-time when thinking about the area, it is rather tough (for most people) to extrapolate from a 2D surface being warped to a 4D hyperspace experiencing the same thing. As an erstwhile Mathematician, I find it easy enough to cope with the following generalisation:

S(1) = The set of all points defined by one variable (x₁)
– i.e. a straight line
S(2) = The set of all points defined by two variables (x₁, x₂)
– i.e. a plane
S(3) = The set of all points defined by three variables (x₁, x₂, x₃)
– i.e. “normal” 3-space
S(4) = The set of all points defined by four variables (x₁, x₂, x₃, x₄)
– i.e. 4-space
⋮
S(n) = The set of all points defined by n variables (x₁, x₂, … , xₙ)
– i.e. n-space

As we increase the dimensions, the Maths continues to work and you can do calculations in n-space (e.g. to determine the distance between two points) just as easily (OK, with some more arithmetic) as in 3-space; Pythagoras still holds true. However, actually visualising say 7-space might be rather taxing for even a Fields Medallist or Nobel-winning Physicist.
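As a concrete (if trivial) illustration of the “Pythagoras still holds true” point, here is a short Python sketch of distance in n-space; the example points are arbitrary:

```python
# The same distance formula works in any number of dimensions; it just
# involves more arithmetic.
from math import sqrt

def distance(p, q):
    """Euclidean distance between two points in n-space."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance((0, 0, 0), (3, 4, 12)))    # 13.0 in 3-space
print(distance((0,) * 7, (1,) * 7))       # sqrt(7) ≈ 2.6458 in 7-space
```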

… and the Maths

More importantly while you can – for example – use 3-space as an analogue for some aspects of 4-space, there are also major differences. To pick on just one area, some pieces of string that are irretrievably knotted in 3-space can be untangled with ease in 4-space.

To briefly reference a probably familiar example, starting with 2-space we can look at what is clearly a family of related objects:

2-space: A square has 4 vertexes, 4 edges joining them and 4 “faces” (each consisting of a line – so the same as edges in this case)
3-space: A cube has 8 vertexes, 12 edges and 6 “faces” (each consisting of a square)
4-space: A tesseract (or 4-hypercube) has 16 vertexes, 32 edges and 8 “faces” (each consisting of a cube)
Note: The reason that “faces” appears in inverted commas is that the physical meaning changes; only in 3-space does this have the normal connotation of a surface with two dimensions. Instead of faces, one would normally talk about the bounding cubes of a tesseract forming its cells.

Even without any particular insight into multidimensional geometry, it is not hard to see from the way that the numbers stack up that:

n-space: An n-hypercube has 2ⁿ vertexes, 2ⁿ⁻¹ × n edges and 2n “faces” (each consisting of an (n-1)-hypercube)
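The pattern is easy to verify mechanically; a tiny Python sketch (mine, not part of the original text) checks it against the square, the cube and the tesseract:

```python
# A quick mechanical check of the pattern stated above.
def hypercube_counts(n):
    """Vertexes, edges and (n-1)-dimensional 'faces' of an n-hypercube."""
    return 2 ** n, n * 2 ** (n - 1), 2 * n

for n in (2, 3, 4):
    print(n, hypercube_counts(n))
# 2 (4, 4, 4)    - square
# 3 (8, 12, 6)   - cube
# 4 (16, 32, 8)  - tesseract
```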

Again, while the Maths is compelling, it is pretty hard to visualise a tesseract. If you think that a drawing of a cube is an attempt to render a 3D object on a 2D surface, then a picture of a tesseract would be a projection of a projection. The French (with a proud history of Mathematics) came up with a solution: just do one projection by building a 3D “picture” of a tesseract.

La Grande Arche de la Défense

As an aside, it could be noted that the above photograph is of course a 2D projection of a 3D building, which is in turn a projection of a 4D shape; however recursion can sometimes be pushed too far!

Drawing multidimensional objects in 2D, or even building them in 3D, is perhaps a bit like employing an analogy (this sentence being of course a meta-analogy). You may get some shadowy sense of what the true object is like in n-space, but the projection can also mask essential features, or even mislead. For some things, this shadowy sense may be more than good enough and even allow you to better understand the more complex reality. However, a 2D projection will not be good enough (indeed cannot be good enough) to help you understand all properties of the 3D, let alone the 4D. Hopefully, I have used one element of the very subject matter that Randall raises in his webcomic to further bolster what I believe are a few of the general points that he is making, namely:

  1. Analogies only work to a degree and you over-extend them at your peril
  2. Sometimes the wholly understandable desire to make a complex subject accessible by comparing it to something simpler can confuse rather than illuminate
  3. There are subject areas that very manfully resist any attempts to approach them in a manner other than doing the hard yards – not everything is like something less complex

Why BI is not [always] like Theoretical Physics

Hand with reflecting sphere - Maurits Cornelis Escher (1935). This is your only clue.

Having hopefully supported these points, I’ll move on to the second thing that I mentioned reading; a BI-related blog also referencing Theoretical Physics. I am not going to name the author, mention where I read their piece, state what the title was, or even cite the precise area of Physics they referred to. If you are really that interested, I’m sure that the nice people at Google can help to assuage your curiosity. With that out of the way, what were the concerns that reading this piece raised in my mind?

Well first of all, from the above discussion (and indeed the general tone of this blog), you might think that such an article would be right up my street. Sadly I came away feeling that the connection made was tenuous at best and rather unhelpful (it didn’t really tell you anything about Business Intelligence), and that the piece exhibited nothing beyond a superficial understanding of the scientific theory involved.

The analogy had been drawn based on a single word which is used in both some emerging (but as yet unvalidated) hypotheses in Theoretical Physics and in Business Intelligence. While, just like the 2D projection of a 4D shape, there are some elements in common between the two, there are some fundamental differences. This is a general problem in Science and Mathematics, everyday words are used because they have some connection with the concept in hand, but this does not always imply as close a relationship as the casual reader might infer. Some examples:

  1. In Pure Mathematics, the members of a group may be associative, but this doesn’t mean that they tend to hang out together.
  2. In Particle Physics, an object may have spin, but this does not mean that it has been bowled by Murali
  3. In Structural Biology, a residue is not precisely what a Chemist might mean by one, let alone a lay-person

Part of the blame for what was, in my opinion, an erroneous connection between things that are not actually that similar lies with something that, in general, I view more positively: the popular science book. The author of the BI/Physics blog post referred to just such a tome in making his argument. I have consumed many of these books myself and I find them an interesting window into areas in which I do not have a background. The danger with them arises when – in an attempt to convey meaning that is only truly embodied (if that is the word) in Mathematical equations – our good friend the analogy is employed again. When done well, this can be very powerful and provide real insight for the non-expert reader (often the writers of pop-science books are better at this kind of thing than the scientists themselves). When done less well, this can do more than fail to illuminate; it can confuse, or even in some circumstances leave people with the wrong impression.

Tridimensional realisation of the Riemann Zeta function
© Jean-François Colonna

During my MSc, I spent a year studying the Riemann Hypothesis and the myriad of results that are built on the (unproven) assumption that it is true. Before this I had spent three years obtaining a Mathematics BSc. Before this I had taken two Maths A-levels (national exams taken in the UK during and at the end of what would equate to High School in the US), plus (less relevantly perhaps) Physics and Chemistry. One way or another I had been studying Maths for probably 15 plus years before I encountered this most famous and important of ideas.

So what is the Riemann Hypothesis? A statement of it is as follows:

The real part of all non-trivial zeros of the Riemann Zeta function is equal to one half

There! Are you any the wiser? If I wanted to explain this statement to those who have not studied Pure Mathematics at a graduate level, how would I go about it? Maybe my abilities to think laterally and be creative are not well-developed, but I struggle to think of an easily accessible way to rephrase the proposal. I could say something gnomic such as, “it is to do with the distribution of prime numbers” (while trying to avoid the heresy of adding that prime numbers are important because of cryptography – I believe that they are important because they are prime numbers!).
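For any readers who would like to see the symbols behind the words, the standard definitions are set out below (a reference, not the elusive accessible explanation); the product over primes is Euler’s product formula, which is precisely the link to the distribution of prime numbers alluded to above.

```latex
% For Re(s) > 1 the zeta function is defined by the series below; the equality
% with the product over primes is Euler's product formula. The function is
% extended to the rest of the complex plane by analytic continuation.
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}
         = \prod_{p \,\mathrm{prime}} \frac{1}{1 - p^{-s}},
  \qquad \operatorname{Re}(s) > 1

% The Hypothesis itself: every non-trivial zero lies on the critical line.
\zeta(s) = 0 \ \text{(non-trivially)} \;\Longrightarrow\; \operatorname{Re}(s) = \tfrac{1}{2}
```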

I spent a humble year studying this area, after years of preparation. Some of the finest Mathematical minds of the last century (sadly not a set of which I am a member) have spent vast chunks of their careers trying to inch towards a proof. The Riemann Hypothesis is not like something from normal experience; it is complicated. Some things are complicated and not easily susceptible to analogy.

Equally – despite how interesting, stimulating, rewarding and even important Business Intelligence can be – it is not Theoretical Physics and ne’er the twain shall meet.

And so what?

So after this typically elliptical journey through various parts of Science and Mathematics, what have I learnt? Mainly that analogies must be treated with care and not over-extended lest they collapse in a heap. Will I therefore stop filling these pages with BI-related analogies, both textual and visual? Probably not, but maybe I’ll think twice before hitting the publish key in future!

Euler's product formula for the Riemann Zeta function


Chronological list of articles using xkcd.com illustrations:

  1. A single version of the truth?
  2. Especially for all Business Analytics professionals out there
  3. New Adventures in Wi-Fi – Track 1: Blogging
  4. Business logic [My adaptation]
  5. New Adventures in Wi-Fi – Track 2: Twitter
  6. Using historical data to justify BI investments – Part III

 

Using historical data to justify BI investments – Part III

The earliest recorded surd

This article completes the three-part series which started with Using historical data to justify BI investments – Part I and continued (somewhat inevitably) with Using historical data to justify BI investments – Part II. Having presented a worked example, which focused on using historical data both to develop a profit-enhancing rule and then to test its efficacy, this final section considers the implications for justifying Business Intelligence / Data Warehouse programmes and touches on some more general issues.
 
 
The Business Intelligence angle

In my experience when talking to people about the example I have just shared, there can be an initial “so what?” reaction. It can maybe seem that we have simply adopted the all-too-frequently-employed business ruse of accentuating the good and down-playing the bad. Who has not heard colleagues say “this was a great month excluding the impact of X, Y and Z”? Of course the implication is that when you include X, Y and Z, it would probably be a much less great month; but this is not what we have done.

One goal of business intelligence is to help in estimating what is likely to happen in the future and guiding users in taking decisions today that will influence this. What we have really done in the above example is as follows:

Look out Morlocks, here I come... [alumni of Imperial College London are so creative aren't they?]

  1. shift “now” back two years in time
  2. pretend we know nothing about what has happened in these most recent two years
  3. develop a predictive rule based solely on the three years preceding our back-shifted “now”
  4. then use the most recent two years (the ones we have metaphorically been covering with our hand) to see whether our proposed rule would have been efficacious

For the avoidance of doubt, in the previously attached example, the losses incurred in 2009 – 2010 have absolutely no influence on the rule we adopt; this is based solely on 2006 – 2008 losses. All the 2009 – 2010 losses are used for is to validate our rule.
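To make the mechanics of the four steps concrete, here is a deliberately small Python sketch. Every figure in it is made up; only the shape of the approach (build the rule on 2006 – 2008, validate it on 2009 – 2010) comes from the articles, and a real analysis would also weight by premium rather than taking simple averages of loss ratios.

```python
# A tiny back-test sketch. The policies, loss ratios and the 60% threshold are
# all invented for illustration.
policies = [
    # (policy id, 2006-2008 loss ratio, 2009-2010 loss ratio) - fabricated
    ("P001", 0.35, 0.41),
    ("P002", 1.20, 0.95),
    ("P003", 0.50, 0.38),
    ("P004", 0.90, 1.10),
    ("P005", 0.30, 0.70),   # a counter-example: good business going bad
]

THRESHOLD = 0.60  # hypothetical cut-off for the rule

# Steps 1-3: the rule is derived solely from the 2006-2008 figures
keep = [p for p in policies if p[1] <= THRESHOLD]

# Step 4: the 2009-2010 figures are used purely to validate the rule
def average(values):
    return sum(values) / len(values)

print(f"2009-2010 loss ratio, whole book   : {average([p[2] for p in policies]):.0%}")
print(f"2009-2010 loss ratio, rule applied : {average([p[2] for p in keep]):.0%}")
```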

We have therefore achieved two things:

  1. Established that better decisions could have been taken historically at the juncture of 2008 and 2009
  2. Devised a rule that would have been more effective and displayed at least some indication that this could work going forward in 2011 and beyond

From a Business Intelligence / Data Warehousing perspective, the general pitch is then something like:

Eight out of ten cats said that their owners got rid of stubborn stains no other technology could shift with BI - now with added BA

  1. If we can mechanically take such decisions, based on a very unsophisticated analysis of data, then surely making even simple information available to the humans taking decisions (i.e. basic BI) will improve the quality of their decision-making
  2. If we go beyond this to provide more sophisticated analyses (e.g. including industry segmentation, analysis of insured attributes, specific products sold etc., i.e. regular BI) then we can – by extrapolation from the example – better shape the evolution of the performance of whole books of business
  3. We can also monitor the decisions taken to determine the relative effectiveness of individuals and teams and compare these to their peers – ideally these comparisons would also be made available to the individuals and teams themselves, allowing them to assess their relative performance (again regular BI)
  4. Finally, we can also use more sophisticated approaches, such as statistical modelling to tease out trends and artefacts that would not be easily apparent when using a standard numeric or graphical approach (i.e. sophisticated BI, though others might use the terms “data mining”, “pattern recognition” or the now ubiquitous marketing term “analytics”)

The example also says something else – although we may already have reporting tools, analysis capabilities and even people dabbling in statistical modelling, it appears that there is room for improvement in our approach. The 2009 – 2010 loss ratio was 54% and it could have been closer to 40%. Thus what we are doing now is demonstrably not as good as it could be, and the monetary value of making a step change in information capabilities can be estimated.

The generation of which should be the object of any BI/DW project worth its salt - thinking of which, maybe a mound of salt would also have worked as an illustration

In the example, we are talking about £1m of premium over a two-year period and £88k of increased profit. What would be the impact of better information on an annual book of £1bn premium? Assuming a linear relationship and using some advanced Mathematics, we might suggest £44m. What is more, these gains would not be one-off, but repeatable every year. Even if we moderate our projected payback to a more conservative figure, our exercise implies that we would be not out of line to suggest, say, an ongoing annual payback of £10m. These are numbers and concepts which are likely to resonate with Executive decision-makers.
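In case the “advanced Mathematics” is too terse, the scaling can be spelled out (a sketch only, using the same linear assumption as above):

```python
# £88k of extra profit on £1m of premium written over two years is £44k per
# £1m per year; applied linearly to a £1bn annual book, that is £44m a year.
uplift_two_years = 88_000          # £, from the worked example
premium_two_years = 1_000_000      # £, from the worked example
annual_book = 1_000_000_000        # £1bn of annual premium

uplift_per_pound_per_year = uplift_two_years / premium_two_years / 2
print(f"implied annual uplift: £{uplift_per_pound_per_year * annual_book:,.0f}")
# implied annual uplift: £44,000,000
```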

To put it even more directly, an increase of £10m a year in profits would quickly swamp the cost of a BI/DW programme, leaving very substantial net benefits. These are payback ratios that most IT managers can only dream of.

As an aside, it may have occurred to readers that the mechanistic rule is actually rather good and – if so – why exactly do we need the underwriters? Taking to one side examples of solely rule-based decision-making going somewhat awry (LTCM anyone?), the human angle is often necessary in messy things like business acquisition and maintaining relationships. Maybe because of this, very few insurance organisations are relying on rules to take all decisions. However it is increasingly common for rules to play some role in their overall approach. This is likely to take the form of triage of some sort. For example:

  1. A rule – maybe not much more sophisticated than the one I describe above – is established and run over policies before renewal.
  2. This is used to score policies as maybe having green, amber or red lights associated with them.
  3. Green policies may be automatically renewed with no intervention from human staff
  4. Amber policies may be looked at by junior staff, who may either OK the renewal if they satisfy themselves that the issues picked up are minor, or refer it to more senior and experienced colleagues if they remain concerned
  5. Red policies go straight to the most experienced staff for their close attention

In this way process efficiencies are gained. Staff time is only applied where it is necessary and the most expensive resources are applied to those cases that most merit their abilities.
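In code, the triage described in the list above might look something like the following sketch; the scoring (here just the historical loss ratio) and the green / amber / red cut-offs are entirely hypothetical:

```python
# A minimal sketch of a triage rule. A real organisation would use whatever
# rule or model it has validated on its own book; the thresholds are invented.
def triage(historical_loss_ratio):
    """Route a renewal to the appropriate queue."""
    if historical_loss_ratio <= 0.50:
        return "green"   # renew automatically, no human intervention
    if historical_loss_ratio <= 0.80:
        return "amber"   # junior staff review, escalate if still concerned
    return "red"         # straight to the most experienced underwriters

for policy, loss_ratio in [("P001", 0.35), ("P002", 0.65), ("P003", 1.20)]:
    print(policy, triage(loss_ratio))
# P001 green
# P002 amber
# P003 red
```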

 
Correlation

From the webcomic of the inimitable Randall Munroe - his mouse-over text is a lot better than mine BTW
© xkcd.com

Let’s pause for a moment and consider the Insurance example a little more closely. What has actually happened? Well we seem to have established that performance of policies in 2006 – 2008 is at least a reasonable predictor of performance of the same policies in 2009 – 2010. Taking the mutual fund vendors’ constant reminder that past performance does not indicate future performance to one side, what does this actually mean?

What we have done is to establish a loose correlation between 2006 – 2008 and 2009 – 2010 loss ratios. But I also mentioned a while back that I had fabricated the figures, so how does that work? In the same section, I also said that the figures contained an intentional bias. I didn’t adjust my figures to make the year-on-year comparison work out. However, at the policy level, I was guilty of making the numbers look like the type of results that I have seen with real policies (albeit of a specific type). Hopefully I was reasonably realistic about this. If every policy that was bad in 2006 – 2008 continued in exactly the same vein in 2009 – 2010 (and vice versa) then my good segment would have dropped from an overall loss ratio of 54% to considerably less than 40%. The actual distribution of losses is representative of real Insurance portfolios that I have analysed. It is worth noting that only a small bias towards policies that start bad continuing to be bad is enough for our rule to work and profits to be improved. Close scrutiny of the list of policies will reveal that I intentionally introduced several counter-examples to our rule; good business going bad and vice versa. This is just as it would be in a real book of business.

Not strongly correlated

Rather than continuing to justify my methodology, I’ll make two statements:

  1. I have carried out the above sort of analysis on multiple books of Insurance business and come up with comparable results; sometimes the implied benefit is greater, sometimes it is less, but it has been there without exception (of course statistics being what it is, if I did the analysis frequently enough I would find just such an exception!).
  2. More mathematically speaking, the actual figure for the correlation between the two sets of years is a less-than-stellar 0.44. Of course a figure of 1 (or indeed -1) would imply total correlation, and one of 0 would imply a complete lack of correlation, so I am not working with doctored figures. Even a fairly mild correlation between data sets (one well short of what would normally be called a strong relationship) can still yield a significant impact on profit.
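The second point can be illustrated with a toy simulation (my own construction, in no way a recreation of the worked example): even a correlation in the 0.4s leaves plenty of room for a crude filtering rule to improve the retained book.

```python
# Generate pairs of "past" and "future" loss ratios with a modest correlation
# of roughly 0.44, then show that a simple "avoid the historically bad
# policies" rule still lowers the future loss ratio of what is kept.
# All parameters are made up.
import random
from math import sqrt

random.seed(1)
RHO = 0.44                                   # the modest correlation from the text
pairs = []
for _ in range(10_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    past = max(0.0, 0.55 + 0.25 * z1)                                       # fabricated
    future = max(0.0, 0.55 + 0.25 * (RHO * z1 + sqrt(1 - RHO ** 2) * z2))   # fabricated
    pairs.append((past, future))

whole_book = sum(f for _, f in pairs) / len(pairs)
kept = [f for p, f in pairs if p <= 0.80]    # the "rule": drop the worst past performers
print(f"future loss ratio, whole book   : {whole_book:.0%}")
print(f"future loss ratio, rule applied : {sum(kept) / len(kept):.0%}")
```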

 
Closing thoughts

Ground floor: Perfumery, Stationery and leather goods, Wigs and haberdashery, Kitchenware and food…. Going up!

Having gone into a lot of detail over the course of these three articles, I wanted to step back and assess what we have covered. Although the worked-example was drawn from my experience in Insurance, there are some generic learnings to be made.

Broadly I hope that I have shown that – at least in Insurance, but I would argue with wider applicability – it is possible to use the past to infer what actions we should take in the future. By a slight tweak of timeframes, we can even take some steps to validate approaches suggested by our information. It is important that we remember that the type of basic analysis I have carried out is not guaranteed to work. The same can be said of the most advanced statistical models; both will give you some indication of what may happen and how likely this is to occur, but neither of them is foolproof. However, either of these approaches has more chance of being valuable than, for example, solely applying instinct, or making decisions at random.

In Patterns, patterns everywhere, I wrote about the dangers associated with making predictions about events that are essentially unpredictable. This is another caveat to be borne in mind. However, to balance this it is worth reiterating that even partial correlation can lead to establishing rules (or more sophisticated models) that can have a very positive impact.

While any approach based on analysis or statistics will have challenges and need careful treatment, I hope that my example shows that the option of doing nothing, of continuing to do things how they have been done before, is often fraught with even more problems. In the case of Insurance at least – and I suspect in many other industries – the risks associated with using historical data to make predictions about the future are, in my opinion, outweighed by the risks of not doing this; on average of course!

But then 1=2 for very large values of 1
 

A quantised approach to formal group interactions of hominidae (size > 2)

Much as he liked to debunk the field, I always thought that photon paths in Feynman diagrams presaged elements of String Theory

I am very excited to be able to report that I have taken a major step forward in expanding our understanding of the universe. The paper has been lodged on arXiv.org and it is only a matter of time before the Nobel Committee gets round to calling.

I have established that the fundamental element of time (at least between 9am and 5pm Monday to Friday) can exist only in pre-determined, discrete quantities. Furthermore I have shown that these also have a minimum value, with all other quantities being multiples of this. More conventional (aka hidebound) researchers would slavishly adhere to established, but outmoded, protocol and allocate this minimum quantity the number 1. I have been braver and less mentally constrained than my more quotidian colleagues in deciding to associate the number ½ with this quantity. My reasoning for this is that while quantum numbers of 1, 2, 3 and so on are regularly observed, those consisting of an odd multiple (n > 1) of the minimum value are rarer than free lunches.

However, I have left the most exciting finding until last. I have rigorously calculated the value of the initial quantum number. My work determines beyond any doubt that this is 1.8 × 10⁹ µs (p < 0.003). I have modestly called this fundamental building block of nature Peter’s Constant and – as is customary – selected an appropriate Greek letter to represent it. The first letter of my name is ‘P’ and the Greek letter for ‘P’ is π, so I have naturally adopted this.

I believe that there may be some other antiquated use for this letter, but am confident that the importance of my discoveries is such that π will soon come to be associated only with its more relevant (albeit slightly newer) meaning and justice will have been seen to be done.

Congratulatory telegrams, bouquets of flowers and magna of Champagne (one of my hobbies is Latin plurals and Bollinger would be nice) may be sent to the normal address.
 


 
With acknowledgements to S. L. Cooper PhD – Department of Theoretical Physics, California Institute of Technology, Pasadena, CA 91125, USA – without whose inspiration this work would not have been possible.

The triangle paradox – solved

When I posted The triangle paradox, I said that I would post a solution in a few days. As per the comments on my earlier article (some via Twitter), and indeed the context of the article in which this supposed mathematical conundrum was originally posted, the heart of the matter is an optical illusion.

If we consider just the first part of the paradox:

More than meets the eyes

Then the key is in realising that the red and green triangles are not similar (in the geometric sense of the word). In particular the left hand angles are not the same; thus, when lined up, they do not form the hypotenuse of the larger, compound triangle that our eyes see. In the example above, the line tracing the red and green triangles dips below what would be the hypotenuse of the big triangle. In the rearranged version, it bulges above. This is where the extra white square comes from.

It is probably easier to see this diagrammatically. The following figure has been distorted to make things easier to understand:

Dimensions exaggerated

Let’s start with my point about the triangles not being similar:

∠EAB = tan⁻¹(2/5) ≈ 21.8°

∠FAC = tan⁻¹(3/8) ≈ 20.6°

So the two triangles are not similar and, as stated above, the two arrangements don’t quite line up to form the big triangle shown in the paradox. There is a “gap” between them formed by the grey parallelogram above, whose size has been exaggerated. This difference gets lost in the thickness of the lines and also our eyes just assume that the two arrangements form the same big triangle.

To work out the area of the parallelogram:

AE = √(2² + 5²) = √29
EI = √(3² + 8²) = √73
AI = √(5² + 13²) = √194

The area of a triangle with sides a, b and c is given by:

Area of triangle
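(For anyone who cannot see the image, the formula in question is presumably Heron’s formula:)

```latex
% Heron's formula: s is the semi-perimeter of a triangle with sides a, b and c.
s = \frac{a + b + c}{2},
\qquad
\mathrm{Area} = \sqrt{s\,(s - a)(s - b)(s - c)}
```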

Sparing you the arithmetic, when you substitute the values for AE, EI and AI in the above equation, the area of ∆ AEI is precisely ½.

∆ AEI and ∆ AFI are clearly identical, so the area of parallelogram AEIF is twice the area of either:

2 x ½ = 1

This is where the “missing” square comes from.
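For completeness, a short Python check of the arithmetic; the coordinates are my own reconstruction from the side lengths quoted above, not taken from the original figure.

```python
# Verify that the triangle with the stated side lengths has area 1/2, using
# coordinates consistent with those lengths: A = (0, 0), E = (5, 2), I = (13, 5).
from math import sqrt

pA, pE, pI = (0, 0), (5, 2), (13, 5)

def dist(p, q):
    return sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

a, b, c = dist(pA, pE), dist(pE, pI), dist(pA, pI)   # √29, √73, √194
s = (a + b + c) / 2
area = sqrt(s * (s - a) * (s - b) * (s - c))         # Heron's formula
print(round(area, 6))        # 0.5 - the area of triangle AEI
print(round(2 * area, 6))    # 1.0 - parallelogram AEIF, the "missing" square
```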
 


 
As was pointed out in a comment on the original post, the above should form something of a warning to those who place wholly uncritical faith in data visualisation. Much like statistics, while this is a powerful tool in the hands of the expert, it can mislead if used without due care and attention.
 

The triangle paradox

This seems to be turning into Mathematics week at peterjamesthomas.com. The “paradox” shown in the latter part of this article was presented to the author and some of his work colleagues at a recent seminar. It kept company with some well-known trompe l’œil such as:

Old or young woman?

and

Quadruped?

and

Parallel lines?

However the final item presented was rather more worrying as it seemed to be less related to the human eye’s (or perhaps more accurately the human brain’s) ability to discern shape from minimal cues and more to do with mathematical fallacy. The person presenting these images (actually they were slightly different ones; I have simplified the problem) claimed that they themselves had no idea about the solution.

Consider the following two triangles:

Spot the difference...

The upper one has been decomposed into two smaller triangles – one red, one green – a blue rectangle and a series of purple squares.

These shapes have then been rearranged to form the lower triangle. But something is going wrong here. Where has the additional white square come from?

Without even making recourse to Gödel, surely this result stabs at the heart of Mathematics. What is going on?

After a bit of thought and going down at least one blind alley, I managed to work this one out (and thereby save Mathematics single-handedly). I’ll publish the solution in a later article. Until then, any suggestions are welcome.
 


 
For those who don’t want to think about this too much, the solution has now been posted here.
 

Medical malpractice

8 plus 7 equals 15, carry one, er...

I was listening to a discussion with two medical practitioners on the radio today while driving home from work. I’ll remove the context of the diseases they were debating as the point I want to make is not specifically to do with this aspect and dropping it removes a degree of emotion from the conversation. The bone of contention between the two antagonists was the mortality rate from a certain set of diseases in the UK and whether this was to do with the competency of general practitioners (GPs, or “family doctors” for any US readers) and the diagnostic procedures they use, or to do with some other factor.

In defending her colleagues from the accusations of the first interviewee, the general practitioner said that the rate of mortality for sufferers of these diseases in other European countries (she specifically cited Belgium and France) was greater than in the UK. I should probably pause at this point to note that this comment seemed the complete opposite of every other European health survey I have read in recent years, but we will let that pass and instead focus on the second part of her argument. This was that better diagnoses would be made if the UK hired more doctors (like her), thereby allowing them to spend more time with each patient. She backed up this assertion by then saying that France has many more doctors per 1,000 people than the UK (the figures I found were 3.7 per 1,000 for France and 2.2 per 1,000 for the UK; these were totally different to the figures she quoted, but again I’ll let that pass as she did seem to at least have the relation between the figures in each country the right way round this time).

What the GP seemed to be saying is summarised in the following chart:

Vive la difference

I have no background in medicine, but to me the lady in question made the opposite point to the one she seemed to want to. If there are fewer doctors per capita in the UK than in France, but UK mortality rates are better, it might be more plausible to argue that fewer doctors implies better survival rates; this is what the above chart suggests. Of course this assertion is open to challenge and – as with most statistical phenomena – there are undoubtedly many other factors. There is also of course the old chestnut of correlation not implying causality (not that the above chart even establishes correlation). However, at the very least, the “facts” as presented did not seem to be a prima facie case for hiring more UK doctors.

Sadly for both the GP in question and for inhabitants of the UK, I think that the actual graph is more like:

This exhibit could perhaps suggest that the second doctor had a potential point, but such simplistic observations, much as we may love to make them, do not always stand up to rigorous statistical analysis. Statistical findings can be as counter-intuitive as many other mathematical results.

Speaking of statistics, when challenged on whether she had the relative mortality rates for France and the UK the right way round, the same GP said, “well you can prove anything with statistics.” We hear this phrase so often that I guess many of us come to believe it. In fact it might be more accurate to say, “selection bias is all pervasive”, or perhaps even “innumeracy will generally lead to erroneous conclusions being drawn.”

When physicians are happy to appear on national radio and exhibit what is at best a tenuous grasp of figures, one can but wonder about the risk of numerically-based medical decisions sometimes going awry. With doctors also increasingly involved in public affairs (either as expert advisers or – in the UK at least – often as members of parliament), perhaps these worries should also be extended into areas of policy making.

Even more fundamentally (but then as an ex-Mathematician I would say this), perhaps the UK needs to reassess how it teaches mathematics. Also maybe UK medical schools need to examine numeric proficiency again just before students graduate as well as many years earlier when candidates apply; just in case something in the process of producing new doctors has squeezed their previous mathematical ability out of them.

Before I begin to be seen as an opponent of the medical profession, I should close by asking a couple of questions that are perhaps closer to home for some readers. How many of the business decisions that are taken using information lovingly crafted by information professionals such as you and me are marred by an incomplete understanding of numbers on the part of [hopefully] a small subsection of users? As IT professionals, what should we be doing to minimise the likelihood of such an occurrence in our organisations?