probability argument - what am I missing?

Risk_Reward


Total Posts: 5
Joined: Jun 2020
 
Posted: 2020-12-18 15:11
This is from Decision Theory: Principles and Approaches by Parmigiani et al.:


Let x1 and x2 be two Bernoulli trials. Suppose the experimenter’s probabilities are such that P(x1 = 0) = P(x2 = 0) = 0.5 and P(x1 + x2 = 0) = 0.05. Then, P(x1+x2 = 1) = 0.9 and P(x1+x2 = 2) = 0.05.
Let e be the new evidence that x1 = 1, let h1 be the hypothesis that x1 + x2 = 2, and h2 be the hypothesis that x2 = 1. Given e, the two hypotheses are equivalent. Yet, probability-wise, h1 is corroborated by the data, whereas h2 is not.
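(For concreteness, the quoted numbers follow from the givens: P(x1+x2=2) = P(x1=1, x2=1) = 1 - P(x1=0) - P(x2=0) + P(x1=0, x2=0) = 1 - 0.5 - 0.5 + 0.05 = 0.05, and so P(x1+x2=1) = 1 - 0.05 - 0.05 = 0.9.)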

Why is h2 not corroborated by the data?
The way I understand the problem is that there's some negative correlation between x1 and x2. We already know that the unconditional probability of x1+x2=2 is 0.05, whereas the conditional probability of x1+x2=2, given x1=1, is somewhere above 0.05 and below 0.5.
Given the negative correlation, the probability that x2=1, conditional on x1 being 1, should likewise be less than 0.5 but above 0.05.

So, why is h2 "not corroborated"?
To me, h1 and h2 are equivalent and equally corroborated by the data.

silverside


Total Posts: 1440
Joined: Jun 2004
 
Posted: 2020-12-18 18:12
You've written this in a confusing way.

I suggest you write out the probabilities for the 4 possible outcomes (0,0), (1,0), (0,1) and (1,1) - using Venn diagrams, as if you were at high school - and from those get the conditional probabilities.
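For instance, a minimal sketch of that enumeration in Python (the four joint probabilities are pinned down by the givens):

p00 = 0.05                    # P(x1=0, x2=0), given
p01 = 0.5 - p00               # P(x1=0, x2=1) = P(x1=0) - p00 = 0.45
p10 = 0.5 - p00               # P(x1=1, x2=0) = P(x2=0) - p00 = 0.45
p11 = 1.0 - p00 - p01 - p10   # P(x1=1, x2=1) = 0.05

p_e = p10 + p11               # P(x1=1) = 0.5, the evidence
print(p11 / p_e)              # P(x1+x2=2 | x1=1) = P(x2=1 | x1=1) = 0.1
                              # (given x1=1, the two events coincide)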

Corroboration is a word I would associate more with a court of law than with statistics.

Risk_Reward


Total Posts: 5
Joined: Jun 2020
 
Posted: 2020-12-18 21:55
So, P(x1=1)=0.5 and P(x2=1)=0.5, but P(x1+x2=2)=0.05=P(x1+x2=0); these are the givens.

This means that P(x2=1 | x1=1) = P(x1=1, x2=1) / P(x1=1) = 0.05 / 0.5 = 0.1 (i.e. the events are negatively correlated, so x1 being 1 decreases the probability of x2 also being 1).

Now, using Bayes' theorem, we calculate the conditional probabilities for the two "alternative" hypotheses in the problem (which, given the evidence, are actually equivalent):

for h1: P(x1+x2=2 | x1=1) = P(x1=1 | x1+x2=2) * P(x1+x2=2) / P(x1=1) = 1 * 0.05 / 0.5 = 0.1

for h2: P(x2=1 | x1=1) = P(x1=1 | x2=1) * P(x2=1) / P(x1=1) = 0.1 * 0.5 / 0.5 = 0.1

So, the conditional probabilities, given the evidence, are the same, no matter how one specifies the hypothesis (whether h1 or h2).
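A minimal check of both calculations in Python, assuming the joint probabilities from the enumeration above:

p00, p01, p10, p11 = 0.05, 0.45, 0.45, 0.05   # joint distribution implied by the givens
p_e = p10 + p11                # P(x1=1) = 0.5, the evidence

# h1: x1 + x2 = 2
prior_h1 = p11                 # P(x1+x2=2) = 0.05
lik_h1 = 1.0                   # P(x1=1 | x1+x2=2) = 1
print(lik_h1 * prior_h1 / p_e) # posterior = 0.1

# h2: x2 = 1
prior_h2 = p01 + p11           # P(x2=1) = 0.5
lik_h2 = p11 / prior_h2        # P(x1=1 | x2=1) = 0.1
print(lik_h2 * prior_h2 / p_e) # posterior = 0.1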

Why does the book then say that "probability-wise, h1 is corroborated by the data, whereas h2 is not"?

And yes, I agree on the terminology, but "corroborated" is the word the author uses.

Risk_Reward


Total Posts: 5
Joined: Jun 2020
 
Posted: 2020-12-18 21:57
Maybe I am reading this wrong, and the word "corroborated" really means something different...

But if it means the degree to which h1 and h2 are supported by the data (x1 being 1), then that degree is the same for h1 and h2.

silverside


Total Posts: 1440
Joined: Jun 2004
 
Posted: 2020-12-19 12:59
I would tend to agree with you; the original author is unclear at best (although there may be some context we are missing).

If you are happy using Bayes' theorem, I wouldn't dwell on the exact wording; just move on to the next chapter.

Risk_Reward


Total Posts: 5
Joined: Jun 2020
 
Posted: 2020-12-22 11:01
That's what I ended up doing.

Thanks for the 2nd opinion!

deeds


Total Posts: 512
Joined: Dec 2008
 
Posted: 2020-12-22 11:48

As I understand it, corroboration is a nascent term of art in Bayesian analysis, probably born in the work of I.J. Good*... nascent maybe in the sense that pre-Shannon "information" was a quantity, a concept in need of focus and precision - like option risk before Samuelson (and then Dupire, or...?). I think one of the more concrete lines of work tries to map Karl Popper's sense of corroboration onto Bayesian inference.

*(who worked as Turing's assistant and has some wonderful unmined results from that time for lovers of non-parametric approaches)

Understandrists of this stuff (rashomon?) slap me back (with references, please)