
Err America files Chapter 11


KD in CA

Recommended Posts

Oh. My. God. Please, please someone tell me that holcombs arm didn't just post. Please tell me that I am completely sh-- faced and imagining that he claimed that rolling a die has an expected roll of 3.5, and that there is ERROR involved in rolling a die. Holy sh--. . . .

You're the (*^*&%^$^# at the casino that would watch a coin land heads up 50 times in a row, and then bet your life savings on tails with the next flip, thinking that the probability of it coming up tails is somehow greater on the next flip.

 

but keep on smacking your head against the wall. And please, please, PM me when you roll 3.5 on your die.

831602[/snapback]

Obviously you didn't understand a single word of my post. Thanks for being consistent.


  • Replies 598

On average you expect the die to roll a 3.5. This is a conglomerate of six separate expectations involving the numbers 1 - 6.

 

Obviously, a die roll of 6 is not an error. It is a die roll of 6. Your attempt to use that particular die roll to estimate the average value of a die roll, however, will result in an error of 2.5.

831741[/snapback]

 

How the !@#$ can you expect to get a value that doesn't exist in the system?????? :rolleyes:

 

Let me make this as clear as I can: A die has six faces. They are numbered: 1,2,3,4,5,6. No face is numbered three and a half. You can never EXPECT to roll three and a half. You cannot derive an "expectation" of some event that doesn't exist from a "conglomerate" of expectations that do exist. It's akin to saying the "expected value" of a coin flip is "hails" or "teads", because it's a "conglomeration" of heads and tails.

 

And since when is rolling a die an attempt to measure an average value of the die? Where the !@#$ do you get this sh-- from? :doh:

What, exactly, is your !@#$ing problem with this concept?
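For readers trying to follow the argument: the 3.5 under dispute is just the probability-weighted average of the six faces, which need not be a value the die can actually show. A quick Python sketch (illustrative only, not from either poster):

```python
from fractions import Fraction

# Expected value of a fair six-sided die: the equally weighted mean of
# the faces 1..6, even though no face shows 3.5.
faces = range(1, 7)
expected = Fraction(sum(faces), len(faces))
print(expected)             # 7/2, i.e. 3.5

# A single roll of 6, used as an estimate of that mean, misses it by
# 2.5 -- the "error" the earlier post refers to.
print(6 - float(expected))  # 2.5
```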


This is !@#$ing genius! The MIT card counters have nothing on us! We have holcombs arm math on our side.

 

You do realize you have stumbled upon a way to have an advantage over the house in Vegas. By betting $30.50 and regressing your bet towards the mean due to error and chance (since they are obviously the same thing), it means that you'll win whatever on top of your $30.50 bet, but lose only $20.

 

So here's our plan. We go to Vegas and, say at blackjack, we bet $30.50. If we win and it pays out 2:1, we win $61. But if we lose, we only lose $20, and still come out with $10.50 in our pocket. Then we use this extra money, take it to the craps table, and hit the jackpot big time when someone hits 7 by rolling double 3.5's.

 

We're gonna be rich!

831736[/snapback]

 

And if they don't roll double 3.5's, we'll know the game's rigged, because the dice were wrong! Brilliant! :rolleyes:


How the !@#$ can you expect to get a value that doesn't exist in the system??????  :rolleyes:

 

Let me make this as clear as I can: A die has six faces.  They are numbered: 1,2,3,4,5,6.  No face is numbered three and a half.  You can never EXPECT to roll three and a half.  You cannot derive an "expectation" of some event that doesn't exist from a "conglomerate" of expectations that do exist.  It's akin to saying the "expected value" of a coin flip is "hails" or "teads", because it's a "conglomeration" of heads and tails.

 

And since when is rolling a die an attempt to measure an average value of the die?  Where the !@#$ do you get this sh-- from?  :doh:

What, exactly, is your !@#$ing problem with this concept?

831779[/snapback]

For the die roll example to be consistent with the I.Q. test example, the goal of a single die roll has to be to predict the average value of a die roll over the course of 1000 rolls. The stuff about discrete faces, and all that other junk, is just an excuse for you to a) hit me over the head by pointing out perfectly obvious facts about dice as though I was unaware of these things, and b) hide from the question I asked earlier which you didn't answer. Suppose someone scored a 750 on the math section of the SAT the first time they took it. They're going in to retake the test. Is this person's expected score on the retest 750, or is it less? That question is at the heart of the regression toward the mean debate, and not all this junk about dice.


:rolleyes:  :doh:  :lol: OMG, I am really starting to get sore from all of this laughing  :lol:  :lol:  :lol:

831791[/snapback]

Do you remember that big guy from last night--the one who slipped something in your drink when you weren't looking? Do you remember how everything sort of went all blurry after that? And you're wondering why you feel sore today? Um, yeah. About that . . .


Bungee Jumper introduced dice into this discussion. To make dice analogous to the I.Q. test, you have to roll a single die a single time to estimate what that die's average roll would be over the course of 1000 rolls. It's actually a more confusing concept to understand with dice than it is with test scores, but Bungee Jumper challenged me to present the topic by means of dice.

 

No, I didn't. I challenged you to explain dice. You couldn't even do that.

 

As long as error is symmetrically distributed, regression toward the mean will take place. By "symmetrically distributed error" I mean that your chances of getting lucky on the test are equal to your chances of getting unlucky. I chose normally distributed error for my simulation because I had to choose something, and normally distributed error is as good as anything. The average was zero, meaning that on average someone got neither lucky nor unlucky.

 

Seriously...what part of "error and chance are two completely different things" are you having trouble with?

 

Suppose that two people who got a 750 on the math section of the SAT got married and had kids. Suppose those kids scored 725s on the math section of the SAT. Do the lower scores mean the kids are dumber than their parents? Not necessarily. The average person who scored a 750 on the math section the first time will, on average, score a 725 the second time. In this example, the children's scores reflect the 725s their parents would have gotten had they neither been lucky nor unlucky. The fact that children's scores slightly regress toward the mean when compared with those of their parents is largely, perhaps entirely, due to the fact that their parents' scores were measured incorrectly in the first place.

 

What? The kids score closer to the mean because their parents' scores were wrong? Does that actually make sense to anyone? :rolleyes:
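The simulation described above (a fixed true score plus symmetric, zero-mean "luck" on each sitting) can be sketched as follows. The score scale, noise sizes, and cutoff here are illustrative assumptions, not the original poster's code:

```python
import random

random.seed(0)

# Each test-taker has a fixed true score; every sitting adds symmetric,
# zero-mean luck (normal noise, as in the simulation described above).
N = 100_000
true_scores = [random.gauss(500, 100) for _ in range(N)]
first = [t + random.gauss(0, 50) for t in true_scores]
second = [t + random.gauss(0, 50) for t in true_scores]

# Select the people who scored high the first time; on the retest the
# same people average lower, i.e. they regress toward the mean.
high = [(f, s) for f, s in zip(first, second) if f >= 700]
avg_first = sum(f for f, s in high) / len(high)
avg_second = sum(s for f, s in high) / len(high)
print(round(avg_first), round(avg_second))
```

The second average comes out below the first even though no one got smarter or dumber; the high first scores were, on average, partly luck.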


For the die roll example to be consistent with the I.Q. test example, the goal of a single die roll has to be to predict the average value of a die roll over the course of 1000 rolls.

 

No, a die roll is NEVER consistent with a normally distributed data set...because a die roll is not normally distributed. A pair of dice, however, is a reasonable approximation...and is the example I used.

 

And a pair of dice does not have error either. As I said many posts ago: you roll an 11, your next roll will tend to regress toward the mean because of probability: the distribution of possible values is such that there are three chances in 36 to roll an 11 or better, but 33 to roll less than 11. You have an 11 in 12 chance of rolling under 11, which almost always means a roll closer to the mean value of 7. So let's say you roll a...oh, let's say 9, for the sake of argument. That does not mean your error was reduced from 4 to 2...it means you rolled a more likely value than 11 or 12. That is the difference between probability and error. That is what causes regression toward the mean.
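Those counts are easy to verify by enumerating all 36 equally likely outcomes (a sketch, not code from the thread):

```python
from itertools import product

# All 36 equally likely outcomes for a pair of fair dice.
sums = [a + b for a, b in product(range(1, 7), repeat=2)]
high = sum(1 for s in sums if s >= 11)  # ways to roll 11 or 12
low = sum(1 for s in sums if s < 11)    # ways to roll under 11
print(high, low)  # 3 33 -> an 11-in-12 chance of rolling under 11
```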

 

The stuff about discrete faces, and all that other junk, is just an excuse for you to a) hit me over the head by pointing out perfectly obvious facts about dice as though I was unaware of these things,

 

Then you shouldn't have introduced the blithering example of a single die. :doh:

 

...and b) hide from the question I asked earlier which you didn't answer. Suppose someone scored a 750 on the math section of the SAT the first time they took it. They're going in to retake the test. Is this person's expected score on the retest 750, or is it less? That question is at the heart of the regression toward the mean debate, and not all this junk about dice.

831792[/snapback]

 

I think, before I answer such a complicated question, it would be better if you understood the basic statistical concepts embodied in a more simple system like a pair of dice. :rolleyes:


It makes sense to anyone who understands the article to which I linked.

831803[/snapback]

 

You may have had a point if there was at least one person, out of the dozen or so posters here who admit to knowing statistics, who backed you in any argument about statistics.


I think, before I answer such a complicated question, it would be better if you understood the basic statistical concepts embodied in a more simple system like a pair of dice. 

The fact that you're afraid to answer my question suggests a (slightly) growing awareness that you don't fully understand the regression toward the mean article.


You may have had a point if there was at least one person, out of the dozen or so posters here who admit to knowing statistics, who backed you in any argument about statistics.

831810[/snapback]

A dozen? Where did you come up with that number?

 

Ramius's contributions to this discussion have shown me only that it would be a mistake for any employer to hire him to do serious statistical work.


The fact that you're afraid to answer this question suggests a (slightly) growing awareness that you don't fully understand the regression toward the mean article.

831818[/snapback]

 

No, the fact that I'm "afraid" (sic) to answer the question suggests that I'm aware you need a basic education in statistics before you can comprehend the answer.

 

The answer, by the way, is neither.


I'll tell you what, then...send your example to the writers of the article.  See what they think.

831806[/snapback]

If you want to see what they think, maybe you should read the article.

A person who scored 750 out of a possible 800 on the quantitative portion of the SAT takes the SAT again (a different form of the test is used). Assuming the second test is the same difficulty as the first and that there was no learning or practice effect, what score would you expect the person to get on the second test? The surprising answer is that the person is more likely to score below 750 than above 750; the best guess is that the person would score about 725. If this surprises you, you are not alone. This phenomenon, called regression to the mean, is counter intuitive and confusing to many professionals as well as students.
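The article's 725 figure is consistent with the textbook shrinkage formula, predicted = mean + reliability × (observed − mean), if one assumes a population mean of 500 and a test-retest reliability of 0.9; both of those numbers are assumptions chosen for illustration, not values stated in the quoted passage:

```python
def predicted_retest(observed, mean=500.0, reliability=0.9):
    """Regression-toward-the-mean prediction: shrink the observed score
    toward the population mean by the test's reliability.
    (The mean and reliability values here are illustrative assumptions.)"""
    return mean + reliability * (observed - mean)

print(predicted_retest(750))  # 725.0
```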

If you want to see what they think, maybe you should read the article.
A person who scored 750 out of a possible 800 on the quantitative portion of the SAT takes the SAT again (a different form of the test is used). Assuming the second test is the same difficulty as the first and that there was no learning or practice effect, what score would you expect the person to get on the second test? The surprising answer is that the person is more likely to score below 750 than above 750; the best guess is that the person would score about 725. If this surprises you, you are not alone. This phenomenon, called regression to the mean, is counter intuitive and confusing to many professionals as well as students.

831830[/snapback]

 

AND IT'S NOT BECAUSE OF ERROR, IT'S BECAUSE OF THE PROBABILITY DISTRIBUTION, YOU IDIOT!!!!

 

Jesus Christ... <_<

