[vpFREE] Re: how to tell if your machine is fair?

 


OK, you converted 23 of 200 four-card flushes. That's 11.5%. Whenever you hold 4 cards to a flush, 9 of the 47 unseen cards will complete the hand, so you should expect to make 9/47, or about 19.1% of them, roughly 38 out of 200. Was what you experienced just bad luck? Consult the binomial distribution: http://en.wikipedia.org/wiki/Binomial_distribution

Your n = 200 and p = 9/47 (about 0.191), so np is about 38.3 and the variance is np(1-p), about 31.0. The square root of that is the standard deviation, about 5.6. So what happened to you was (38.3 - 23)/5.6, roughly 2.7 standard deviations below the mean.
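
If you want to re-run that arithmetic yourself, here's a minimal sketch in Python (purely illustrative; the 9/47 draw probability and the counts are just the ones discussed above):

# Expected flushes and spread for 200 four-card flush draws.
n = 200        # draws tracked
p = 9 / 47     # chance of filling each one (9 suited cards among 47 unseen)
made = 23      # flushes actually completed

mean = n * p                  # ~38.3 expected
variance = n * p * (1 - p)    # ~31.0
sd = variance ** 0.5          # ~5.6
z = (made - mean) / sd        # ~ -2.7 standard deviations
print(f"expected {mean:.1f}, sd {sd:.1f}, z = {z:.2f}")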

In short, you had a bad day. What happened to you (the downswing) happens well under 1% of the time, roughly 0.3%, or about 3 out of every 1,000 such 200-hand counts. You should be pissed, but not ready to take legal action. Try it again with a fresh count (without the "selection bias" the other poster refers to; you may have only remembered this one bad stretch), and if the same thing happens, then something might be wrong. Since the expected number of made flushes is large (about 38), you can also approximate all of this with the normal distribution via some online calculator.
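
If you don't feel like hunting down an online calculator, here's one way to get both the exact binomial tail and the normal approximation. This is only a sketch and assumes you have Python with SciPy installed; it uses the same n, p, and count as above:

from scipy.stats import binom, norm

n, p, made = 200, 9 / 47, 23

# Exact binomial: chance of filling 23 or fewer flushes in 200 draws.
exact = binom.cdf(made, n, p)

# Normal approximation with a continuity correction.
mean = n * p
sd = (n * p * (1 - p)) ** 0.5
approx = norm.cdf((made + 0.5 - mean) / sd)

print(f"exact binomial tail:  {exact:.4f}")   # a fraction of a percent
print(f"normal approximation: {approx:.4f}")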

--- In vpFREE@yahoogroups.com, "armchairpresident" <smellypuppy@...> wrote:
>
> I like the analogy; it immediately brought another question to mind: can people (not the machine) be gaffed? I don't mean this in a bad way, just that, in the big picture, almost every study I have seen has people distributed along a probability curve. I didn't look at the data, but I even remember a study about lightning claiming that certain people were more or less predisposed to being struck, and that those struck once were more likely to be struck again.
>
> My point is, what if everyone's baseline varied from the expected value for every VP hand when examined across many different machines and a large N of hands? For example, I thought I was not converting 4 to a flush anywhere near the expected frequency, regardless of what machine I played. So I kept count over several sessions of 9/6 JoB on several different machines. I only kept count for 200 consecutive events; 23 converted out of this lot. Perhaps I am gaffed in a negative manner for this hand?
>
>
>
>
>
>
> --- In vpFREE@yahoogroups.com, "Frank" <frank@> wrote:
> >
> > OK. You completely misunderstood what I was saying. It will invalidate the testing utility I'm making if people use their already-existing data. Why? Imagine this.
> >
> > You post in the newspaper that you'd like to do a study into how likely it is to be hit by lightning. Not surprisingly, the people who answer your ad are those most concerned about the issue (AKA people who have been hit). After looking at all your volunteer test subjects, you conclude that the chances of being hit by lightning are 1 in 1.
> >
> > Problem: All the people who weren't hit by lightning didn't volunteer.
> >
> > Solution: Take the volunteers, but toss out everything that has happened to them in their lives before they signed up for your study. Dismiss their preexisting data, and collect new data from this point on.
> >
> > The rule of thumb with statistical tests is never to use the data that made you want to do the test. Test forward from the point in time you decide to do the test and dismiss what's gone before.
> >
> > All data, by definition, is past data. The past I'm talking about here, the part that should be ignored, is what happened before you decided to do the test.
> >
> > ~FK
> >
> > --- In vpFREE@yahoogroups.com, "cdfsrule" <cdfsrule@> wrote:
> > >
> > > I know I am taking this quote out of context (sorry FK), but your statement:
> > >
> > > --- In vpFREE@yahoogroups.com, "Frank" <frank@> wrote:
> > > >
> > > > Statistical tests cannot be used on anything that's already happened, or else one opens the door to selective recruitment and confirmation bias.
> > > >
> > > > ~FK
> > > >
> > >
> > > is absolutely not true. In fact, statistical tests can only be used on "data", that is, on stuff that has already been observed, computed, recorded, etc. Indeed, statistical tests are used to determine (in the sense of ascribing a probability to) whether there is or was bias, selective recruitment, etc., in events (and their associated data) that have already occurred.
> > >
> > > Take a look at: http://en.wikipedia.org/wiki/Statistical_hypothesis_testing
> > >
> >
>
