
Err America files Chapter 11


KD in CA


I am measuring the regression of I.Q. scores.

 

No, you're not. No matter how many times you say it, you'll still be wrong.

 

Someone who obtains a score that's far from the mean on the first test, more often than not, will get a score that's closer to the mean upon taking a second test. This is what the article meant when it said that someone who scored a 750 on the math section the first time around will, on average, get a 725 upon taking the retest.

831389[/snapback]

 

Which has precisely jack sh-- to do with error. :rolleyes:



That's what he means by "you seem to have a hard time understanding it".  You're communicating a completely different message, because you can't differentiate "probability" from "error".  :doh:

 

And it's really not that hard: You roll a pair of dice, it's probabilistic.  When they stop moving, it's deterministic.  The system has an expectation value (a "mean") of 7.  Rolls subsequent to very low or very high rolls (2 or 3, or 11 or 12) will tend to regress toward the mean, not because dice are error-prone or inaccurate - they're not; they're very accurate and not the least bit subject to error.  It's because there are only 3 ways to roll a 2 or 3, and 33 other combinations possible the next time you roll.  Regression toward the mean happens because your current measure is deterministic, but your future measure is probabilistic.
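
A minimal Monte Carlo sketch of that claim in Python (the roll count and seed are arbitrary choices):

import random

# Sketch: after an extreme roll (2-3 or 11-12), the *next* roll of a fair pair of
# dice still averages about 7 - regression toward the mean with no "error" in the dice.
random.seed(0)
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(200_000)]
next_after_extreme = [rolls[i + 1] for i in range(len(rolls) - 1)
                      if rolls[i] <= 3 or rolls[i] >= 11]
print(sum(rolls) / len(rolls))                            # ~7.0 overall
print(sum(next_after_extreme) / len(next_after_extreme))  # also ~7.0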

 

That's probability, not error.  It's really !@#$ing simple.  Like I said, I can teach that to a three year old.  Why the !@#$ do you have so much trouble with it?  :rolleyes:

831390[/snapback]

You actually managed to correctly describe the difference between probabilistic and deterministic systems! I'm impressed. I'd be even more impressed if you actually connected it with the discussion at hand, but one step at a time.


You actually managed to correctly describe the difference between probabilistic and deterministic systems! I'm impressed. I'd be even more impressed if you actually connected it with the discussion at hand, but one step at a time.

831398[/snapback]

 

Goody goody for me.

 

Now if, as you say, regression toward the mean is caused by error, explain to me how measurement error causes my dice to regress toward the mean. :rolleyes:


No, you're not.  No matter how many times you say it, you'll still be wrong.

Which has precisely jack sh-- to do with error.  :doh:

831393[/snapback]

Why am I continuing to beat my head against the brick wall of your incomprehension? I wish I knew . . . :rolleyes:

 

Suppose the following: there exists an I.Q. test. If an average person were to take that test 1000 times, that person's scores would vary within a 20 point range. Suppose someone got a 180 on first taking this test. Do you agree or disagree that this person's expected score upon retaking it is less than 180?
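
A minimal sketch of that setup in Python, using assumed numbers (population mean 100, true-score SD 15, test noise SD 5) and the standard shrinkage result for jointly normal true scores and test noise:

# Sketch with assumed numbers: true I.Q. ~ N(100, 15), test noise ~ N(0, 5).
# For jointly normal scores, E[true | observed] shrinks the observed score toward
# the mean by the reliability ratio var_true / (var_true + var_noise); the retest's
# own noise averages zero, so the expected retest equals the expected true score.
mean, var_true, var_noise = 100.0, 15.0 ** 2, 5.0 ** 2
observed = 180.0
reliability = var_true / (var_true + var_noise)           # 0.9 with these numbers
expected_retest = mean + reliability * (observed - mean)
print(expected_retest)                                    # 172.0 - less than 180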


Goody goody for me.

 

Now if, as you say, regression toward the mean is caused by error, explain to me how measurement error causes my dice to regress toward the mean.  :rolleyes:

831400[/snapback]

Certainly. Suppose you have a die, and you roll it to get some idea as to what its average roll might be. You roll it the first time, and get a six. In this case, your attempt to measure the die's true average roll resulted in an error of 2.5. What will the die yield the second time you roll it? Its expected roll is 3.5--in other words, it will, on average, regress fully toward the mean.

 

In the above case, the expected value of the regression toward the mean was 100%, because the trial was entirely luck-based. Now suppose you're observing something that's based partly on luck, but mostly on skill--such as an I.Q. test. Someone who scores a 180 the first time around is generally going to have been a little lucky. This person's score will, on average, regress toward the mean upon taking a retest. The extent to which the regression toward the mean is expected to take place depends entirely on how luck-based that I.Q. test really was. The more luck-based the test, the greater the expected regression toward the mean.
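
A rough simulation sketch of that last point (all the skill and luck numbers below are arbitrary): the fraction of an extreme first score that regresses away on the retest grows with the share of variance that comes from luck.

import random

# Scores = skill + luck on each sitting.  The fitted slope of retest-on-first-test
# is roughly var_skill / (var_skill + var_luck), so the expected regression toward
# the mean (1 - slope) grows as the test becomes more luck-based.
random.seed(2)
for luck_sd in (0, 5, 15, 40):
    firsts, retests = [], []
    for _ in range(200_000):
        skill = random.gauss(100, 15)
        firsts.append(skill + random.gauss(0, luck_sd))
        retests.append(skill + random.gauss(0, luck_sd))
    mf, mr = sum(firsts) / len(firsts), sum(retests) / len(retests)
    slope = (sum((f - mf) * (r - mr) for f, r in zip(firsts, retests))
             / sum((f - mf) ** 2 for f in firsts))
    print(f"luck SD {luck_sd:>2}: expected regression toward the mean ~ {1 - slope:.0%}")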


Certainly. Suppose you have a die, and you roll it to get some idea as to what its average roll might be. You roll it the first time, and get a six. In this case, your attempt to measure the die's true average roll resulted in an error of 2.5. What will the die yield the second time you roll it? Its expected roll is 3.5--in other words, it will, on average, regress fully toward the mean.

831410[/snapback]

 

What????? Oh, sweet mother of God, this is the most !@#$ed up thing I have ever read. :rolleyes::doh::lol: I think I just hurt myself laughing...

 

Please...enlighten the class. How do you roll a 3.5 with a single die? And how is the die in error if you roll a six? :lol:


What?????  Oh, sweet mother of God, this is the most !@#$ed up thing I have ever read.  :rolleyes:  :doh:  :lol:  I think I just hurt myself laughing...

 

Please...enlighten the class.  How do you roll a 3.5 with a single die?  And how is the die in error if you roll a six?  :lol:

831433[/snapback]

How do you roll a 3.5 with a single die? I suggest you keep rolling dice until you figure it out. (That should keep him busy for a while. :lol: )

 

I know my die analogy seems a little strange, but it's actually the same thought process that goes into giving someone any other sort of a test. Suppose you give someone an I.Q. test. The number you're really interested in is how well this person would do, on average, if given 1000 I.Q. tests (assuming no fatigue effects or learning effects). You're using the first score on the first I.Q. test to estimate what their average score would be across those 1000 tests.

 

With the die, you're using that first die roll to estimate what your average roll would be if you were to roll the die 1000 times. A six sided die rolled 1000 times will, on average, give you a roll of 3.5.
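
A tiny sketch of that arithmetic (seed and roll count are arbitrary):

import random

# The long-run average of a fair die is (1+2+3+4+5+6)/6 = 3.5, even though no single
# roll can ever show 3.5.  The 1000-roll count matches the example above.
random.seed(3)
rolls = [random.randint(1, 6) for _ in range(1000)]
print(sum(rolls) / len(rolls))   # close to 3.5
print(sum(range(1, 7)) / 6)      # exactly 3.5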


How do you roll a 3.5 with a single die? I suggest you keep rolling dice until you figure it out. (That should keep him busy for a while.  :rolleyes: )

 

I know my die analogy seems a little strange, but it's actually the same thought process that goes into giving someone any other sort of a test. Suppose you give someone an I.Q. test. The number you're really interested in is how well this person would do, on average, if given 1000 I.Q. tests (assuming no fatigue effects or learning effects). You're using the first score on the first I.Q. test to estimate what their average score would be across those 1000 tests.

 

With the die, you're using that first die roll to estimate what your average roll would be if you were to roll the die 1000 times. A six sided die rolled 1000 times will, on average, give you a roll of 3.5.

831455[/snapback]

Why would you use the 1st die roll to estimate what your average roll would be?

 

Why do you assume error is normally distributed in your IQ examples? Why couldn't (or more accurately, wouldn't) there be a different error distribution (or combination of error distributions)?

 

What do the last 15 pages of this discussion have to do with your position that eugenics are not only desirable but politically feasible?

 

And finally, why has this thread lasted past my prediction of 18 pages?

 

There are several other questions about this discussion, but many of them have already been asked.


Suppose you have a die, and you roll it to get some idea as to what its average roll might be. You roll it the first time, and get a six. In this case, your attempt to measure the die's true average roll resulted in an error of 2.5. What will the die yield the second time you roll it? Its expected roll is 3.5--in other words, it will, on average, regress fully toward the mean.

 

 

831410[/snapback]

 

The probability of 1-6 coming up with one die is exactly the same. A normal distribution only emerges as you increase the number of dice.
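
A short sketch of that point: enumerating every outcome shows one die is flat, while the sum of several dice piles up in the middle and starts to look bell-shaped.

from collections import Counter
from itertools import product

# Exact distributions of the sum of 1, 2, and 3 fair dice, drawn as crude text bars.
for n_dice in (1, 2, 3):
    counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=n_dice))
    total = 6 ** n_dice
    print(f"--- {n_dice} die/dice ---")
    for s in sorted(counts):
        print(f"{s:>2} {'#' * round(100 * counts[s] / total)}")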


Certainly. Suppose you have a die, and you roll it to get some idea as to what its average roll might be. You roll it the first time, and get a six. In this case, your attempt to measure the die's true average roll resulted in an error of 2.5. What will the die yield the second time you roll it? Its expected roll is 3.5--in other words, it will, on average, regress fully toward the mean.

 

In the above case, the expected value of the regression toward the mean was 100%, because the trial was entirely luck-based. Now suppose you're observing something that's based partly on luck, but mostly on skill--such as an I.Q. test. Someone who scores a 180 the first time around is generally going to have been a little lucky. This person's score will, on average, regress toward the mean upon taking a retest. The extent to which the regression toward the mean is expected to take place depends entirely on how luck-based that I.Q. test really was. The more luck-based the test, the greater the expected regression toward the mean.

831410[/snapback]

 

Oh. My. God. Please, please someone tell me that holcombs arm didn't just post. Please tell me that I am completely sh-- faced and imagining that he claimed that rolling a die has an expected roll of 3.5, and that there is ERROR involved in rolling a die. Holy sh--.

 

How does someone this !@#$ing stupid actually manage to log onto the intertubes on a daily basis?

 

Let's see, we can now add discrete and continuous variables to the growing list of things HA has no !@#$ing clue about.

 

Rolling a die has no error, !@#$tard! It's a simple probability over discrete values. The die does not have a true average roll! You roll the die and get a number from 1-6. There's a 1/6 chance of any given number. The next time you roll the die, the probabilities are the same. There is going to be no "regression" using a die roll. What a numbnuts! Just because you roll a 6 once on a die doesn't make it more likely that you are going to roll a low number the next time. A die does not have an average roll value.

 

You're the (*^*&%^$^# at the casino who would watch a coin land heads up 50 times in a row, and then bet your life savings on tails with the next flip, thinking that the probability of it coming up tails is somehow greater on the next flip.

 

But keep on smacking your head against the wall. And please, please, PM me when you roll 3.5 on your die.


How do you roll a 3.5 with a single die? I suggest you keep rolling dice until you figure it out. (That should keep him busy for a while.  :rolleyes: )

 

I know my die analogy seems a little strange, but it's actually the same thought process that goes into giving someone any other sort of a test. Suppose you give someone an I.Q. test. The number you're really interested in is how well this person would do, on average, if given 1000 I.Q. tests (assuming no fatigue effects or learning effects). You're using the first score on the first I.Q. test to estimate what their average score would be across those 1000 tests.

 

With the die, you're using that first die roll to estimate what your average roll would be if you were to roll the die 1000 times. A six sided die rolled 1000 times will, on average, give you a roll of 3.5.

831455[/snapback]

 

But you specifically said the die has an "expected roll" of 3.5 - your words, not mine. How do you expect to roll a 3.5 with a die? You also said that a roll of 6 has an error of 2.5 - your words again, not mine. How is the die in error?

 

Very simple questions. You shouldn't feel the need to dodge them.


With the die, you're using that first die roll to estimate what your average roll would be if you were to roll the die 1000 times. A six sided die rolled 1000 times will, on average, give you a roll of 3.5.

831455[/snapback]

 

No, your argument has absolutely zero statistical value and holds no water. Once again, you are working with discrete variables, and trying to justify a continuous-variable argument.

 

You do realize, don't you, that your die example is equivalent to saying that if I put all the letters of the alphabet in a hat and randomly pick one out, the average letter will be M? :rolleyes:

 

Actually, using your ass backwards logic, since there are 26 potential values, the average value of the letter you pick will be M.5, and picking other letters will mean there is error in your sample. :doh:

 

Hopefully now you understand how ridiculously stupid you sound.


No, your argument has absolutely zero statistical value and holds no water. Once again, you are working with discrete variables, and trying to justify a continuous-variable argument.

 

You do realize, don't you, that your die example is equivalent to saying that if I put all the letters of the alphabet in a hat and randomly pick one out, the average letter will be M?  :rolleyes:

 

Actually, using your ass backwards logic, since there are 26 potential values, the average value of the letter you pick will be M.5, and picking other letters will mean there is error in your sample. :doh:

 

Hopefully now you understand how ridiculously stupid you sound.

831655[/snapback]

 

I've got $20 that says he doesn't, but responds that we're wrong and don't understand the relationship between probability and error.


I've got $20 that says he doesn't, but responds that we're wrong and don't understand the relationship between probability and error.

831681[/snapback]

 

I'll raise you $30.50 that he'll accuse you of ignoring the debate and using the forum to insult him.


Can't do it.  I already measured the bet as $20, so your bet is wrong by $10.50.  But maybe if you bet again, it'll regress toward $20.

831696[/snapback]

 

This is !@#$ing genius! The MIT card counters have nothing on us! We have holcombs arm math on our side.

 

You do realize you have stumbled upon a way to have an advantage over the house in Vegas. By betting $30.50 and regressing your bet toward the mean due to error and chance (since they are obviously the same thing), you'll win whatever the payout is on top of your $30.50 bet, but lose only $20.

 

So here's our plan. We go to Vegas, and say at blackjack we bet $30.50. If we win and it pays out 2:1, we win $61. But if we lose, we only lose $20, and still come out with $10.50 in our pocket. Then we take this extra money to the craps table and hit the jackpot big time when someone hits 7 by rolling double 3.5's.

 

We're gonna be rich!


But you specifically said the die has an "expected roll" of 3.5 - your words, not mine.  How do you expect to roll a 3.5 with a die?  You also said that a roll of 6 has an error of 2.5 - your words again, not mine.  How is the die in error?

 

Very simple questions.  You shouldn't feel the need to dodge them.

831624[/snapback]

On average you expect the die to roll a 3.5. That's the average of six equally likely outcomes - the numbers 1 through 6.



Obviously, a die roll of 6 is not an error. It is a die roll of 6. Your attempt to use that particular die roll to estimate the die's average roll, however, will result in an error of 2.5.


On average you expect the die to roll a 3.5. That's the average of six equally likely outcomes - the numbers 1 through 6.



Obviously, a die roll of 6 is not an error. It is a die roll of 6. Your attempt to use that particular die roll to estimate the die's average roll, however, will result in an error of 2.5.

831741[/snapback]

 

You cannot get a mathematical average, or mean, using discrete values/variables.

 

You cannot assign an average value to a die roll, because it is a probability. There is no !@#$ing error in a die roll!

 

According to you, I can pick my alphabet letters, and I should expect an average letter value of M, but I may have an error value of G or X.

 

If being stupid were a crime, you would have been put to death a long, long time ago.


On average you expect the die to roll a 3.5. That's the average of six equally likely outcomes - the numbers 1 through 6.



Obviously, a die roll of 6 is not an error. It is a die roll of 6. Your attempt to use that particular die roll to estimate the die's average roll, however, will result in an error of 2.5.

831741[/snapback]

Have you taken a statistics class...ever?

 

You can equally expect 1-6 to show up...THEY ALL HAVE EQUAL PROBABILITY.


Why would you use the 1st die roll to estimate what your average roll would be?

A very good question. The answer is that in the real world, you'd never do something so foolish as that.

 

Suppose you wanted to know what someone's average score would be if given 1000 I.Q. tests. Your way of estimating this average score would be to give the person just one I.Q. test. Unfortunately, the person might get lucky or unlucky on that one I.Q. test, so your estimate of their true I.Q. might contain some error.

 

Bungee Jumper introduced dice into this discussion. To make dice analogous to the I.Q. test, you have to roll a single die a single time to estimate what that die's average roll would be over the course of 1000 rolls. It's actually a more confusing concept to understand with dice than it is with test scores, but Bungee Jumper challenged me to present the topic by means of dice.

 

Why do you assume error is normally distributed in your IQ examples?  Why couldn't (or more accurately, wouldn't) there be a different error distribution (or combination of error distributions)?

As long as error is symmetrically distributed, regression toward the mean will take place. By "symmetrically distributed error" I mean that your chances of getting lucky on the test are equal to your chances of getting unlucky. I chose normally distributed error for my simulation because I had to choose something, and normally distributed error is as good as anything. The average was zero, meaning that on average someone got neither lucky nor unlucky.

 

You ask about different error distributions. Suppose a uniform error distribution. Someone taking an I.Q. test has a 20% chance of getting really unlucky and scoring 20 points too low, a 20% chance of getting mildly unlucky and scoring 10 points too low, a 20% chance of getting scored correctly, etc. Now suppose you're testing a population with 10 people who have true I.Q.s of 190, 100 people with true I.Q.s of 180, etc. Of your 10 190s, 2 will get really lucky on that I.Q. test, and score a 210. If, therefore, someone who scored a 210 came up to you and said, "hey, buddy, I'm retaking the test, what do you think I'll get," the answer would be 190.

You know that you're dealing with a person with a true I.Q. of 190. This person might once again get really lucky and score a 210, or he might get really unlucky and score a 170. If you average out all the scores (weighted by probability) this person's expected score is 190 for the second test taking. The same logic applies to those who scored a 200 on the test the first time, those who scored a 190, those who scored a 180, or anyone else who scored above the mean. All of them are expected to get a score somewhat closer to the mean upon retaking the test.
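
A small sketch that works through that example exactly. The population counts (10 people at a true 190, 100 at 180, 1,000 at 170, and so on down the scale) are just the illustrative numbers from the paragraph above, extended downward:

from fractions import Fraction

# Uniform error: -20, -10, 0, +10, +20, each with probability 1/5 on every sitting.
population = {190 - 10 * k: 10 ** (k + 1) for k in range(8)}   # true 190 down to 120
errors = [-20, -10, 0, 10, 20]
p_err = Fraction(1, 5)

def expected_retest(observed):
    # Weight each true score by how many people with it would show `observed`.
    weights = {t: n * p_err for t, n in population.items() if observed - t in errors}
    total = sum(weights.values())
    # Retest error averages zero, so the expected retest is the expected true score.
    return sum(t * w for t, w in weights.items()) / total

for obs in (210, 200, 190):
    print(obs, "->", float(expected_retest(obs)))   # 210 -> 190.0, and so on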

What do the last 15 pages of this discussion have to do with your position that eugenics are not only desirable but politically feasible?

When two exceptionally smart people have children, their children's measured I.Q.s tend to be somewhat lower than the measured I.Q.s of their parents. I'm arguing that, in general, people with exceptionally high I.Q. scores were not only disproportionately smart, but also disproportionately lucky on I.Q. tests. As the article I linked to stated, someone who gets a 750 on the math section of the SAT is more likely to be someone who should have gotten a 725 but got lucky, than someone who should have gotten a 775 but got unlucky. This is because there are more true 725s than true 775s.

 

Suppose that two people who got a 750 on the math section of the SAT got married and had kids. Suppose those kids scored 725s on the math section of the SAT. Do the lower scores mean the kids are dumber than their parents? Not necessarily. The average person who scored a 750 on the math section the first time will, on average, score a 725 the second time. In this example, the children's scores reflect the 725s their parents would have gotten had they been neither lucky nor unlucky. The fact that children's scores slightly regress toward the mean when compared with those of their parents is largely, perhaps entirely, because their parents' scores were measured with some error in the first place. This is why regression toward the mean isn't the dynamite objection to a eugenics program that some have claimed.
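
A hedged Monte Carlo sketch of that argument. It deliberately assumes that each child's true ability exactly equals the parents' average true ability (perfect inheritance, no genetic regression at all), and the scale (true scores ~ N(500, 100), test noise ~ N(0, 50)) is made up, not real SAT data. Even so, children of parents selected for scoring about 750 come out below 750 on average, from the selection on noisy scores alone.

import random

random.seed(4)
parent_scores, child_scores = [], []
for _ in range(2_000_000):
    t1, t2 = random.gauss(500, 100), random.gauss(500, 100)
    o1, o2 = t1 + random.gauss(0, 50), t2 + random.gauss(0, 50)
    if 740 <= o1 <= 760 and 740 <= o2 <= 760:      # both parents "scored about 750"
        child_true = (t1 + t2) / 2                 # assumed: exact average of parents
        parent_scores.append((o1 + o2) / 2)
        child_scores.append(child_true + random.gauss(0, 50))
print(sum(parent_scores) / len(parent_scores))     # about 750 by construction
print(sum(child_scores) / len(child_scores))       # well below 750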

