edit: June 16, 2012 – Fixed up some of my poor grammar.
For my first post I would like to present and discuss what I consider to be one of the most interesting yet straightforward results of Bayesian inference. The following scenario is based on one given by E.T. Jaynes in his tour de force book “Probability Theory: The Logic of Science”, in chapter 5, section 3, “Converging and diverging views”. I highly recommend this book, by the way, to anyone interested in probability theory and/or logical inference; it is extremely enlightening on many matters.
Anyway, consider the following scenario. You are sitting at home watching TV, when Mr $N$ appears with a sensational claim that some commonly used drug is unsafe. We will call this information $D$, and call the hypothesis that the drug is safe $S$. Prior information held by each individual is denoted $I$. Along with you (Mr $A$), Mr $B$ and Mr $C$ also see this claim. Their (and your) prior beliefs that the drug is safe ($P(S|I)$) are 0.9, 0.1 and 0.9 respectively, i.e. initially Mr $A$ and Mr $C$ fairly strongly believe the drug to be safe, while Mr $B$ fairly strongly thinks it to be unsafe.
Now, one might expect that if Mr $A$, $B$ and $C$ are all rational people who reason perfectly logically in accordance with Bayesian rules, then upon receiving the information $D$ (that the drug is unsafe) their belief in the safety of the drug would be lowered, in all cases. However, Jaynes shows that in fact it depends very much on what each rational person thinks about Mr $N$, since the information they have obtained is not direct evidence that the drug is unsafe. Rather, they learn only that Mr $N$ claims that the drug is unsafe. And this turns out to be of great importance.
Let the beliefs of Mr $A$, $B$ and $C$ about the character of Mr $N$ be as follows. They all agree that if the drug really were proved unsafe, then Mr $N$ would indeed be appearing on TV to tell them about it (so $P(D|\bar{S}I) = 1$ for all three). However, they each have strongly different views about what he would do if the drug IS safe. Say A trusts his honesty, but C does not, and B is somewhere on the fence (so $P(D|SI_A) = 0.01$, $P(D|SI_B) = 0.3$ and $P(D|SI_C) = 0.99$). To explain a little further, $P(D|SI)$ is the probability that Mr $N$ would appear on TV claiming that the drug is unsafe ($D$), given that the drug is actually safe ($S$), and given the prior beliefs about the character of Mr $N$ ($I$).
Now it is straightforward from Bayes’ theorem to compute the posterior beliefs that A, B, and C will have that the drug is safe, $P(S|DI)$:

$$P(S|DI) = \frac{P(D|SI)\,P(S|I)}{P(D|SI)\,P(S|I) + P(D|\bar{S}I)\,P(\bar{S}|I)}$$
The answers for each individual are 0.083, 0.032 and 0.899 for Mr $A$, $B$ and $C$ respectively. To make it more obvious what has happened, allow me to summarise the transition from prior to posterior for each individual in the form (prior → view of Mr $N$ → posterior):

Mr $A$: $0.9 \rightarrow 0.083$ (initially thinks drug is safe → trusts Mr $N$ → no longer thinks drug is safe)
Mr $B$: $0.1 \rightarrow 0.032$ (initially thinks drug is unsafe → is wary of Mr $N$ → still doesn’t think drug is safe)
Mr $C$: $0.9 \rightarrow 0.899$ (initially thinks drug is safe → thinks Mr $N$ is a big biased liar → still thinks drug is safe)
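These numbers are easy to check for yourself. Here is a minimal Python sketch, using the likelihood values $P(D|SI) = 0.01$, $0.3$ and $0.99$ for A, B and C’s views of Mr $N$’s character (the values that reproduce the quoted posteriors):

```python
# Posterior that the drug is safe, by Bayes' theorem:
# P(S|D I) = P(D|S I) P(S|I) / [P(D|S I) P(S|I) + P(D|~S I) P(~S|I)]
def posterior_safe(prior_safe, p_claim_if_safe, p_claim_if_unsafe=1.0):
    num = p_claim_if_safe * prior_safe
    return num / (num + p_claim_if_unsafe * (1 - prior_safe))

# For each viewer: (prior P(S|I), P(D|S I) -- how likely Mr N would
# make the claim even if the drug were safe).  All agree P(D|~S I) = 1.
viewers = {"A": (0.9, 0.01), "B": (0.1, 0.3), "C": (0.9, 0.99)}

for name, (prior, p_claim) in viewers.items():
    print(name, round(posterior_safe(prior, p_claim), 3))
# prints: A 0.083, B 0.032, C 0.899
```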
It is important to make explicit that the differing initial beliefs about Mr $N$ lead to differing conclusions about the safety of the drug, even though everyone involved received exactly the same information from him. It is a straightforward extension of this scenario to imagine a second messenger, Mr $M$, appearing on TV claiming the drug to be safe after all, with some people thinking Mr $N$ is trustworthy, and others thinking that Mr $M$ is trustworthy. The result would then be that the population becomes polarised between the two camps of opinion about the safety of the drug, even if they all began with quite moderate prior beliefs. In many ways it is entirely obvious that this should happen, but it is nice to see our intuitions faithfully reproduced by the probability theory.
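That polarisation is easy to demonstrate numerically. The following sketch makes some assumptions of my own, beyond what is in the original scenario: both viewers start at a moderate $P(S|I) = 0.5$, a hypothetical second messenger (call him Mr $M$) then appears claiming the drug is safe, the two claims are treated as independent given $S$, and the trust numbers 0.01 and 0.99 are illustrative:

```python
# One Bayes update: posterior P(S|claim) from prior and the two
# likelihoods P(claim|S) and P(claim|~S).
def update(prior, p_claim_if_safe, p_claim_if_unsafe):
    num = p_claim_if_safe * prior
    return num / (num + p_claim_if_unsafe * (1 - prior))

# Mr N claims "unsafe"; Mr M then claims "safe".  Everyone agrees an
# honest messenger would relay the truth; trust sets how likely each
# messenger is to make his claim when it is false (illustrative values).
for name, (pN, pM) in {"X trusts N": (0.01, 0.99),
                       "Y trusts M": (0.99, 0.01)}.items():
    b = 0.5
    b = update(b, pN, 1.0)  # Mr N's claim: P(claim|S)=pN, P(claim|~S)=1
    b = update(b, 1.0, pM)  # Mr M's claim: P(claim|S)=1, P(claim|~S)=pM
    print(name, round(b, 2))
# prints: X trusts N 0.01, Y trusts M 0.99
```

Two viewers who began in perfect agreement end up at opposite extremes after watching exactly the same broadcasts.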
In this scenario the important thing is not the prior beliefs about the drug, but rather the prior beliefs each person has about the trustworthiness of Mr $N$ or Mr $M$. Jaynes goes on in the chapter to show that, regardless of the prior belief about the drug, the direction in which the belief in the safety of the drug moves is determined by the amount of trust each person has in the messenger. It is in fact possible for the posterior belief in the safety of the drug to move in entirely the opposite direction to that indicated by the message of Mr $N$ or Mr $M$, if a person considers it likely enough that they are lying. Unfortunately I am out of time to spend on this right now, but I might make some plots about this and talk about them in a future post.
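In the meantime, the core of that result fits in a couple of lines: the belief moves up or down according to whether the likelihood ratio $P(D|SI)/P(D|\bar{S}I)$ is above or below one. The prior and likelihood values below are illustrative choices of my own, not Jaynes’s:

```python
# Bayes update; the direction of movement depends only on the ratio
# P(D|S I) / P(D|~S I), not on the prior.
def posterior_safe(prior, p_claim_if_safe, p_claim_if_unsafe):
    num = p_claim_if_safe * prior
    return num / (num + p_claim_if_unsafe * (1 - prior))

prior = 0.5  # illustrative moderate prior

# A viewer who thinks a shill Mr N is MORE likely to cry "unsafe" when
# the drug is safe (ratio 0.9/0.3 > 1) RAISES his belief in safety:
print(posterior_safe(prior, 0.9, 0.3))  # ~0.75, moved up
# Flip the ratio (0.3/0.9 < 1) and the same claim lowers it:
print(posterior_safe(prior, 0.3, 0.9))  # ~0.25, moved down
```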