How to be Surprisingly Popular

Popular with the Crowd
 
Introduction

This article is about the wisdom of the crowd [1], or more particularly its all too frequent foolishness. I am going to draw on a paper recently published in Nature by a cross-disciplinary team from the Massachusetts Institute of Technology and Princeton University. The authors are Dražen Prelec, H. Sebastian Seung and John McCoy. The paper’s title is A solution to the single-question crowd wisdom problem [2]. Rather than reinvent the wheel, here is a section from the abstract (with my emphasis):

Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. […] However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared.

 
 
The Problems

The authors describe some compelling examples where a crowd-based approach ignores the aforementioned specialised knowledge. I’ll cover a couple of these in a moment, but let me first add my own.

How heavy is a proton?

Suppose we ask 1,000 people to come up with an estimate of the mass of a proton. One of these people happens to have won the Nobel Prize for Physics the previous year. Is the average of the estimates provided by the 1,000 people likely to be more accurate, or is the estimate of the one particularly qualified person going to be superior? There is an obvious answer to this question [3].
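To make this concrete, here is a minimal simulation of my own (nothing from the paper; the guessing behaviour is entirely assumed): 999 laypeople guess essentially at random across many orders of magnitude, while one expert knows the answer. The crowd’s average is dragged far from the truth by the wildest overestimates.

```python
import random

PROTON_MASS_KG = 1.6726219e-27  # the true value; see note [3]

# Assumption: laypeople guess log-uniformly between 10^-30 and 10^-20 kg.
random.seed(42)
lay_guesses = [10 ** random.uniform(-30, -20) for _ in range(999)]

# The crowd average includes our single Nobel Laureate's exact answer.
crowd_mean = (sum(lay_guesses) + PROTON_MASS_KG) / 1000

print(f"Crowd average: {crowd_mean:.3e} kg")      # dominated by the biggest overestimates
print(f"Expert answer: {PROTON_MASS_KG:.3e} kg")  # the expert is simply right
```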

Lest it be thought that the above flaw in the wisdom of the crowd is confined to populations including a Nobel Laureate, I’ll reproduce a much more quotidian example from the Nature paper [4].

Philadelphia or Harrisburg?

[…] imagine that you have no knowledge of US geography and are confronted with questions such as: Philadelphia is the capital of Pennsylvania, yes or no? And, Columbia is the capital of South Carolina, yes or no? You pose them to many people, hoping that majority opinion will be correct. [in an actual exercise the team carried out] this works for the Columbia question, but most people endorse the incorrect answer (yes) for the Philadelphia question. Most respondents may only recall that Philadelphia is a large, historically significant city in Pennsylvania, and conclude that it is the capital. The minority who vote no probably possess an additional piece of evidence, that the capital is Harrisburg. A large panel will surely include such individuals. The failure of majority opinion cannot be blamed on an uninformed panel or flawed reasoning, but represents a defect in the voting method itself.

I’m both a good and a bad example here. I know the capital of Pennsylvania is Harrisburg because I have specialist knowledge [5]. However, my acquaintance with South Carolina is close to zero. I’d therefore get the first question right and have a 50/50 chance on the second (all other things being equal of course). My assumption is that, for some reason, Columbia is in general much better known than Harrisburg.

Confidence Levels

The authors go on to cover the technique that is often used to try to address this type of problem in surveys. Respondents are also asked how confident they are about their answer. Thus a tentative “yes” carries less weight than a definitive “yes”. However, as the authors point out, such an approach only works if correct responses are strongly correlated with respondent confidence. As is all too evident from real life, people are often both wrong and very confident about their opinion [6]. The authors extended their Philadelphia / Columbia study to apply confidence weightings, but with no discernible improvement.
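To see why, here is a minimal sketch of a confidence-weighted vote (my own illustration with invented figures, not the authors’ code). Confident wrong answers can simply swamp tentative right ones:

```python
def confidence_weighted_vote(responses):
    """responses: list of (answer, confidence) pairs, confidence in [0.5, 1.0]."""
    totals = {"yes": 0.0, "no": 0.0}
    for answer, confidence in responses:
        totals[answer] += confidence  # a tentative vote counts for less
    return max(totals, key=totals.get)

# 60 confident but wrong "yes" votes outweigh 40 tentative but right "no" votes.
responses = [("yes", 0.9)] * 60 + [("no", 0.6)] * 40
print(confidence_weighted_vote(responses))  # -> "yes"
```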
 
 
A Surprisingly Popular Solution

As well as identifying the problem, the authors suggest a solution and later go on to demonstrate its efficacy. Again quoting from the paper’s abstract:

Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions.

Let’s use the state capitals examples again here (as the authors do in the paper). As well as asking respondents, “Philadelphia is the capital of Pennsylvania, yes or no?” you also ask them, “What percentage of people in this survey will answer ‘yes’ to this question?” The key is then to compare the actual survey answers with the predicted survey answers.

[Figure: Columbia and Philadelphia – predicted versus actual survey responses]

As shown in the above exhibit, in the authors’ study, when people were asked whether or not Columbia is the capital of South Carolina, those who replied “yes” felt that the majority of respondents would agree with them. Those who replied “no” symmetrically felt that the majority of people would also reply “no”. So no surprises there. Both groups felt that the crowd would agree with their response.

However, in the case of whether or not Philadelphia is the capital of Pennsylvania there is a difference. While those who replied “yes” also felt that the majority of people would agree with them, amongst those who replied “no”, there was a belief that the majority of people surveyed would reply “yes”. This is a surprise. People who make the correct response to this question feel that the wisdom of the crowd will be incorrect.

In the Columbia example, the percentage of people predicted to reply “yes” tracks the actual response rate. In the Philadelphia example, the predicted percentage of “yes” replies is significantly less than the actual proportion of people making this response [7]. Thus a response of “no” to “Philadelphia is the capital of Pennsylvania, yes or no?” is surprisingly popular. The methodology that the authors advocate then selects the surprisingly popular answer (i.e. “no”), which is indeed correct. Because there is no surprisingly popular answer in the Columbia example, the result of the democratic vote stands; this is again correct.

To reiterate: a surprisingly popular response will overturn the democratic verdict; if there is no surprisingly popular response, the democratic verdict stands unmodified.
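For a binary question, the rule can be expressed in a few lines. The following is a sketch of the principle as described above; the vote counts and prediction figures are invented for illustration and are not taken from the study:

```python
def surprisingly_popular(votes, predicted_yes_shares):
    """votes: list of "yes"/"no" answers.
    predicted_yes_shares: each respondent's prediction (0-1) of the
    fraction of respondents who will answer "yes"."""
    actual_yes = votes.count("yes") / len(votes)
    predicted_yes = sum(predicted_yes_shares) / len(predicted_yes_shares)
    # Select the answer that is more popular than people predicted.
    return "yes" if actual_yes > predicted_yes else "no"

# Philadelphia-style example: "yes" wins the raw vote, but even many "no"
# voters predict a "yes" majority, so "no" is surprisingly popular.
votes = ["yes"] * 65 + ["no"] * 35
predictions = [0.9] * 65 + [0.75] * 35
print(surprisingly_popular(votes, predictions))  # -> "no"
```

Here 65% actually answer “yes”, but on average respondents predict that around 85% will; “yes” thus underperforms its prediction, “no” outperforms its prediction, and “no” is selected.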

Discriminating about Art

As well as confirming the superiority of the surprisingly popular approach (as opposed to either weighted or non-weighted democratic votes) with questions about state capitals, the authors went on to apply their new technique in a range of other areas [8].

  • Study 1 used 50 US state capitals questions, repeating the format [described above] with different populations [9].
     
  • Study 2 employed 80 general knowledge questions.
     
  • Study 3 asked professional dermatologists to diagnose 80 skin lesion images as benign or malignant.
     
  • Study 4 presented 90 twentieth-century artworks to laypeople and art professionals, and asked them to predict the correct market price category.

Taking all responses across the four studies into account [10], the central findings were as follows [11]:

We first test pairwise accuracies of four algorithms: majority vote, surprisingly popular (SP), confidence-weighted vote, and max. confidence, which selects the answer endorsed with highest average confidence.

  • Across all items, the SP algorithm reduced errors by 21.3% relative to simple majority vote (P < 0.0005 by two-sided matched-pair sign test).
     
  • Across the items on which confidence was measured, the reduction was:
    • 35.8% relative to majority vote (P < 0.001),
    • 24.2% relative to confidence-weighted vote (P = 0.0107) and
    • 22.2% relative to max. confidence (P < 0.13).

The authors go on to further kick the tyres [12] on these results [13] without drawing any conclusions that deviate markedly from the ones reproduced above. The surprising finding is that the surprisingly popular algorithm significantly outperforms the algorithms normally used in wisdom of the crowd polling. This is a major result, in theory at least.
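For readers who want to replicate this sort of comparison, here is a hedged sketch of the two-sided matched-pair sign test quoted above, using SciPy’s binomtest. The per-question outcomes below are invented purely for illustration:

```python
from scipy.stats import binomtest

def matched_pair_sign_test(sp_correct, majority_correct):
    """Each argument is a list of booleans, one per question: did the
    algorithm get that question right? Ties (both right, or both wrong)
    are discarded, as is usual for a sign test."""
    sp_wins = sum(s and not m for s, m in zip(sp_correct, majority_correct))
    majority_wins = sum(m and not s for s, m in zip(sp_correct, majority_correct))
    return binomtest(sp_wins, sp_wins + majority_wins, 0.5, alternative="two-sided")

# Hypothetical outcomes: SP alone correct on 18 questions, majority vote alone on 4.
result = matched_pair_sign_test([True] * 18 + [False] * 4,
                                [False] * 18 + [True] * 4)
print(f"P = {result.pvalue:.4f}")
```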
 
 
Some Thoughts

Tools and Toolbox

At the end of the abstract, the authors state that:

Like traditional voting, [the surprisingly popular algorithm] accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning […].

Given the – justified – attention that has been given to machine learning in recent years, this is a particularly interesting claim. More broadly, SP seems to bring much-needed nuance to the wisdom of the crowd. It recognises that the crowd may often be right, but also allows better informed minorities to override the crowd opinion in specific cases. It does this robustly in all of the studies that the authors conducted. It will be extremely interesting to see this novel algorithm deployed in anger, i.e. in a non-theoretical environment. If its undoubted promise is borne out – and the evidence to date suggests that it will be – then statisticians will have a new and powerful tool in their arsenal and a range of predictive activities will be improved.

The scope of applicability of the SP technique is as wide as that of any wisdom of the crowd approach and, to repeat the comments made by the authors in their abstract, has recently included:

[…] political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano

If the authors’ initial findings are repeated in “live” situations, then the refinement to the purely democratic approach that SP brings should elevate an already useful approach to being an indispensable one in many areas.

I will let the authors have a penultimate word [14]:

Although democratic methods of opinion aggregation have been influential and productive, they have underestimated collective intelligence in one respect. People are not limited to stating their actual beliefs; they can also reason about beliefs that would arise under hypothetical scenarios. Such knowledge can be exploited to recover truth even when traditional voting methods fail. If respondents have enough evidence to establish the correct answer, then the surprisingly popular principle will yield that answer; more generally, it will produce the best answer in light of available evidence. These claims are theoretical and do not guarantee success in practice, as actual respondents will fall short of ideal. However, it would be hard to trust a method [such as majority vote or confidence-weighted vote] if it fails with ideal respondents on simple problems like [the Philadelphia one]. To our knowledge, the method proposed here is the only one that passes this test.

[Figure: US Presidential Election polling, borrowed from Wikipedia]

The ultimate thought I will present in this article is an entirely speculative one. The authors posit that their method could be applied to “potentially controversial topics, such as political and environmental forecasts”, while cautioning that manipulation should be guarded against. Their suggestion leads me to wonder what impact a suitably formed surprisingly popular questionnaire would have had on the results of opinion polls in the run-up to both the recent UK European Union Referendum and the plebiscite for the US Presidency. Of course it is now impossible to tell, but maybe some polling organisations will begin to incorporate this new approach going forward. It can hardly make things worse.
 


 
Notes

 
[1]
 
According to Wikipedia, the phenomenon that:

A large group’s aggregated answers to questions involving quantity estimation, general world knowledge, and spatial reasoning has generally been found to be as good as, and often better than, the answer given by any of the individuals within the group.

The authors of the Nature paper question whether this is true in all circumstances.

 
[2]
 
Prelec, D., Seung, H.S., McCoy, J., (2017). A solution to the single-question crowd wisdom problem. Nature 541, 532–535.

You can view a full version of this paper care of Springer Nature SharedIt at the following link. SharedIt is Springer’s content sharing initiative.

Direct access to the article on Nature’s site (here) requires a subscription to the journal.

 
[3]
 
This example is perhaps an interesting rejoinder to the increasing lack of faith in experts in the general population, something I covered in Toast.

Of course the answer is approximately: 1.6726219 × 10⁻²⁷ kg.

 
[4]
 
I have lightly edited this section but abjured the regular bracketed ellipses (more than one […], as opposed to the conic sections, as I note elsewhere). This is both for reasons of readability and also because I have not yet got to some points that the authors were making in this section. The original text is a click away.
 
[5]
 
My wife is from this state.
 
[6]
 
Indeed it sometimes seems that the more wrong the opinion, the more certain people are that it is right.

Here the reader is free to insert whatever political example fits best with their worldview.

 
[7]
 
Because many people replying “no” felt that a majority would disagree with them.
 
[8]
 
Again I have lightly edited this text.
 
[9]
 
To provide a bit of detail, here the team created a questionnaire with 50 separate question sets of the type:

  1. {Most populous city in a state} is the capital of {state}: yes or no?
     
  2. How confident are you in your answer (50–100%)?
     
  3. What percentage of people surveyed will respond “yes” to this question? (1–100%)

This was completed by 83 people split between groups of undergraduate and graduate students at both MIT and Princeton. Again see the paper for further details.

 
[10]
 
And eliding some nuances, such as some responses being binary (yes/no) and others a range (e.g. the dermatologists were asked to rate the chance of malignancy on a six-point scale from “absolutely uncertain” to “absolutely certain”). Also respondents were asked to provide their confidence in some studies and not others.
 
[11]
 
Once more with some light editing.
 
[12]
 
This is a technical term employed in scientific circles and I apologise if my use of jargon confuses some readers.
 
[13]
 
Again please see the actual paper for details.
 
[14]
 
Modified very slightly by my last piece of editing.