
Err America files Chapter 11


KD in CA


If you ask him nicely, Ed can probably help with that...

829590[/snapback]

Whoa...that's the sickest thing I've heard anyone say around here since this disaster:

 

Nah, I just have that "not so fresh" feeling....

245032[/snapback]

I hope T-Bone's not in earshot. He'd offer to lick you clean.

245038[/snapback]


And here's where I start laughing my ass off.  How does the error just go away the second time around?  If they take the test again, you still have 10 people with a "real IQ" (which is a total bull sh-- term, by the way  :D) of 190 taking the same test with the same error...which means you get 2 people scoring 200, and your distribution's the same.  You can't just magically eliminate the error.

 

Well, you can, apparently.  But people with a modicum of common sense and bound by reality can't...  :D

829438[/snapback]

This post is a continuation of my last post. You have the same 10 people with an I.Q. of 190, the same 100 people with an I.Q. of 180, the same error-prone I.Q. test, etc. If you were to test the whole population a second time, you'd get the same results as you got the first time. There'd be two people who scored 200 on the test, 26 who scored 190, etc.

 

But look more closely at what happens when you retest subgroups. Consider the two people who scored a 200 on the test. That subgroup contains only lucky test-takers. In order to score a 200 on the test, you have to have an I.Q. of 190, and you have to get lucky on that first test.

 

Now consider the subgroup of people who scored 190 or better on the test. That group contains all the lucky or correctly tested 190s, as well as the 20 lucky 180s.

 

In order to form a subgroup that includes anyone who got unlucky on the first test, you have to set your threshold to 180 or lower. The 180 threshold gives you 2 unlucky 190s--as well as the rest of the 190 population, for that matter. It gives you the lucky and correctly tested 180s, and it gives you 200 lucky 170s. Even here, lucky people outnumber unlucky by 200+20+2=222 to 2. What happened to the 220 unlucky people whose bad luck should be balancing out all that good luck? Those 220 were excluded from the Threshold group based on their bad luck.

 

Being assigned to the Threshold group is largely a function of I.Q., but also a function of getting lucky on that first I.Q. test. Suppose you were to set the Threshold at 180, and ask everyone in the Threshold group to take the test a second time. All ten people with an I.Q. of 190 would still be part of the Threshold group, and their results would be the same as before--two 200s, six 190s, two 180s. 20 of the 100 180s were excluded from the Threshold group based on getting unlucky on that first test. Of the 80 180s assigned to the Threshold group, 16 will get unlucky on the retest, 48 will score appropriately, and 16 will get lucky and score a 190. In addition, 200 170s were assigned to the Threshold group based on getting lucky the first time they took the test. Of those 200, 40 will get unlucky on the retest and score a 160, 120 will score an appropriate 170, and 40 will get lucky and score a 180.

 

On the retest, the Threshold group appears to shrink. This is because non-Threshold members weren't allowed to take the retest. Consider the 20 180s who got unlucky on the first test. These people weren't retested, and so weren't available to take the places of the 16 180s who appeared to exit the Threshold group on the second test. Or consider the 800 170s who scored appropriately or got unlucky on the first test. Because those 170s weren't retested, they weren't available to take the places of the 160 170s who got lucky on the first test but unlucky or neutral the second time around.

 

By retesting only people who did very well on the first I.Q. test, you're retesting a subgroup of people who got disproportionately lucky on that first test. The retest reveals the fact that Threshold members had been selected mostly based on intelligence, but also partly based on their good luck the first time they took the test.
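Since this keeps going in circles, here's a minimal sketch of the thought experiment in code (Python is my choice; nothing below comes from the thread itself). It builds the 190/180/170 population from the posts above, applies the 20%/60%/20% error pattern, forms the Threshold group from everyone who scored 180 or better on the first test, and then retests only those people:

```python
import random

random.seed(0)

def take_test(iq):
    # 20% lucky (+10), 60% scored correctly, 20% unlucky (-10)
    return iq + random.choices([10, 0, -10], weights=[2, 6, 2])[0]

# 10 people with a true I.Q. of 190, 100 at 180, 1000 at 170
population = [190] * 10 + [180] * 100 + [170] * 1000

first = [(iq, take_test(iq)) for iq in population]

# Threshold group: everyone who scored 180 or better the first time.
threshold = [(iq, score) for iq, score in first if score >= 180]
retest_scores = [take_test(iq) for iq, _ in threshold]

avg = lambda xs: sum(xs) / len(xs)
print("first-test mean of Threshold group:", avg([s for _, s in threshold]))
print("retest mean of the same people:    ", avg(retest_scores))
print("true mean I.Q. of Threshold group: ", avg([iq for iq, _ in threshold]))
```

The first-test mean comes out around 181, while the retest mean lands near the group's true mean I.Q. of roughly 173: the Threshold group was selected partly on luck, and the luck doesn't repeat.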


By retesting only people who did very well on the first I.Q. test, you're retesting a subgroup of people who got disproportionately lucky on that first test. The retest reveals the fact that Threshold members had been selected mostly based on intelligence, but also partly based on their good luck the first time they took the test.

829688[/snapback]

 

And once again, you pick some random "threshold" that doesn't accurately portray what is going on; it just supports your random bull sh--.

 

Eliminating data and picking random thresholds doesn't make a good scientific study. :D


By retesting only people who did very well on the first I.Q. test, you're retesting a subgroup of people who got disproportionately lucky on that first test. The retest reveals the fact that Threshold members had been selected mostly based on intelligence, but also partly based on their good luck the first time they took the test.

829688[/snapback]

 

So for your second experiment, you set up different conditions from your first by arbitrarily eliminating anyone who's disproportionately unlucky, then compare it to the first, and say "Aha! It's different!"

 

Of course it's different. All you've done is arbitrarily chosen a subset of your data that proves your point, while arbitrarily eliminating the subset that disproves your point. And all you've proven is that you're a !@#$ing retard: you can't arbitrarily discard data just because it's inconvenient!!! Particularly in this case: you're discarding error in the negative direction (i.e. "unlucky"), to prove that positive error (i.e. "lucky") is, in fact positive. Which is not regression toward the mean, it's !@#$ing error!!!!

 

How you honestly believe you know what you're talking about is beyond comprehension. Literally, I know three year olds that have a better understanding of this than you do.



How you honestly believe you know what you're talking about is beyond comprehension.  Literally, I know three year olds that have a better understanding of this than you do.

829696[/snapback]

Acting like a three year old doesn't make you one. :D

 

What my example illustrates is that people who get unusually high scores on I.Q. tests, on average, tend to do a little less well the second time they take the test. Whether you set the Threshold at 200, 190, 180, or some other number greater than the mean, you're looking at a group of people that's not only intelligent, but also disproportionately lucky. In my example, the average person who scored a 200 on the I.Q. test had an I.Q. of 190. The average person who scored a 190 on the I.Q. test had an I.Q. in the low 180s. If you knew that a man scored a 200 on an I.Q. test, and knew that he was sitting down to retake the test, you'd expect him to get a 190 on that second test. If you knew that a woman had scored a 190 on the I.Q. test and she was sitting down to retake the test, you'd expect her score the second time around to be in the low 180s, because a low 180s score is the average true I.Q. for those who scored 190 on the test.
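Those conditional averages don't have to be taken on faith; they fall straight out of the counts. Here's a short sketch (assumed Python again, not anyone's posted code) that tallies, for each observed score, how many people land on it and what their average true I.Q. is:

```python
from collections import defaultdict

counts = {190: 10, 180: 100, 170: 1000}   # true I.Q. -> number of people
error = {+10: 0.2, 0: 0.6, -10: 0.2}      # score shift -> probability

# bins[score] accumulates [number of people, sum of their true I.Q.s]
bins = defaultdict(lambda: [0.0, 0.0])
for iq, n in counts.items():
    for shift, p in error.items():
        bins[iq + shift][0] += n * p
        bins[iq + shift][1] += n * p * iq

for score in sorted(bins, reverse=True):
    n, total = bins[score]
    print(f"scored {score}: {n:6.1f} people, average true I.Q. {total / n:.1f}")
```

A score of 190 works out to an average true I.Q. of about 182, the "low 180s" claimed above, because the 20 lucky 180s in that bin outnumber the 6 correctly tested 190s.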


Acting like a three year old doesn't make you one.  :D

 

What my example illustrates is that people who get unusually high scores on I.Q. tests, on average, tend to do a little less well the second time they take the test.

829711[/snapback]

 

No, it doesn't. It illustrates that people who do "better than they should" will not tend to do "better than they should" the second time around. Apply the same reasoning to the other end of the scale, and someone with a 40 IQ who scores 50 the first time should score closer to 40 the second...which is movement away from the overall mean of 100. It is, however, regression toward the mean of the error, which is what you're measuring.

 

And I already know what your stupid little potato-head response is going to be to this: discard those people, and only include those 40-IQ people who scored lower than they should, because they are regressing toward the mean...


No, it doesn't.  It illustrates that people who do "better than they should" will not tend to do "better than they should" the second time around.  Apply the same reasoning to the other end of the scale, and someone with a 40 IQ who scores 50 the first time should score closer to 40 the second...which is movement away from the overall mean of 100.  It is, however, regression toward the mean of the error, which is what you're measuring.

 

And I already know what your stupid little potato-head response is going to be to this: discard those people, and only include those 40-IQ people who scored lower than they should, because they are regressing toward the mean...

829740[/snapback]

Let's look at the other end of the scale. In this example, there are 10 people with an I.Q. of 10, 100 people with an I.Q. of 20, and 1000 people with an I.Q. of 30. They're taking the same error-prone I.Q. test.

 

Of the ten people with an I.Q. of ten, two will get unlucky and score a zero on the I.Q. test. Another six will get the correct score of 10. The remaining two will get lucky and score a 20 on the test.

 

If you look at those who scored a zero on the test, you're looking at only people who got unlucky. Were those two people to retake the test, they would, on average, get a score of ten.

 

Now consider those who scored a 10 on the test. There are the six 10s who were scored correctly, as well as 20 20s who got unlucky on the test. The true average I.Q. for the 26 people who scored 10 is actually a lot closer to 20 than to 10. Were those who scored a 10 the first time around to retake the test, their average score the second time would be in the high teens. It would regress toward the mean value of the distribution.
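The same tally, pointed at this end of the scale (a sketch under the same assumptions as before: Python, and the 20%/60%/20% error pattern):

```python
counts = {10: 10, 20: 100, 30: 1000}   # true I.Q. -> number of people

bins = {}  # observed score -> (number of people, sum of their true I.Q.s)
for iq, n in counts.items():
    for shift, p in ((+10, 0.2), (0, 0.6), (-10, 0.2)):
        people, total = bins.get(iq + shift, (0.0, 0.0))
        bins[iq + shift] = (people + n * p, total + n * p * iq)

for score in sorted(bins):
    people, total = bins[score]
    print(f"scored {score}: average true I.Q. {total / people:.1f}")
# scored 0 -> 10.0; scored 10 -> about 17.7 (the "high teens");
# scored 20 -> about 27.6. Low scorers' true I.Q.s sit above their scores.
```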



Now consider those who scored a 10 on the test. There are the six 10s who were scored correctly, as well as 20 20s who got unlucky on the test. The true average I.Q. for the 26 people who scored 10 is actually a lot closer to 20 than to 10. Were those who scored a 10 the first time around to retake the test, their average score the second time would be in the high teens. It would regress toward the mean value of the distribution.

829753[/snapback]

 

Which is EXACTLY the response I predicted: discard those sub-mean scorers who score too high, and those that score too low move toward the mean.

 

You don't even realize that that is the EXACT OPPOSITE of what you do at the other end of the scale, do you? Of course you're seeing something that looks like regression toward the mean...you're treating opposite ends of the distribution differently. :D


Which is EXACTLY the response I predicted: discard those sub-mean scorers who score too high, and those that score too low move toward the mean.

 

You don't even realize that that is the EXACT OPPOSITE of what you do at the other end of the scale, do you?  Of course you're seeing something that looks like regression toward the mean...you're treating opposite ends of the distribution differently.  :D

829771[/snapback]

 

You are making your arguments overly complicated. I agree completely that the methodology is shaky. However, you do not even need to look that closely to understand his conclusion is wrong:

 

He is trying to say that by retaking the test, he is showing that error is causing a regression towards the mean. However, by retaking the test, you are mitigating the effects of measurement error. If his methodology truly showed a regression towards the mean, he would be helping to prove the opposite hypothesis of his own. It's simple.


You are making your arguments overly complicated. I agree completely that the methodology is shaky. However, you do not even need to look that closely to understand his conclusion is wrong:

 

He is trying to say that by retaking the test, he is showing that error is causing a regression towards the mean. However, by retaking the test, you are mitigating the effects of measurement error. If his methodology truly showed a regression towards the mean, he would be helping to prove the opposite hypothesis of his own. It's simple.

829882[/snapback]

 

No, actually I'm trying to simplify. I'm hoping he can understand you can't bifurcate your sample set and treat it in two different ways and say the results are the same.

 

I already tried the "You're just measuring error" tack with him. He didn't get it. :D


I already tried the "You're just measuring error" tack with him.  He didn't get it.  :D

829885[/snapback]

 

This is what happens when you are dealing with someone who is the "0" on the IQ test, but not one of the unlucky ones.


Which is EXACTLY the response I predicted: discard those sub-mean scorers who score too high, and those that score too low move toward the mean.

 

You don't even realize that that is the EXACT OPPOSITE of what you do at the other end of the scale, do you?  Of course you're seeing something that looks like regression toward the mean...you're treating opposite ends of the distribution differently.  :D

829771[/snapback]

The point I'm making here is simple. Suppose someone who'd previously scored a 200 on an I.Q. test walks into a room to take a retest. The second time around, this person's expected score is 190. Likewise, suppose someone who'd earlier scored a zero on an I.Q. test shows up for a retest. This second person's expected score the second time around is 10. In both cases, the people who got extreme scores on the test will tend to regress toward the mean when given a retest.

 

Nor does regression toward the mean end there. Someone who scored a 190 the first time they took the test will, on average, get a score in the low 180s the second time around. Someone who scored a 10 on the I.Q. test will, on average, get a score in the high teens the second time they take the test.


You are making your arguments overly complicated. I agree completely that the methodology is shaky. However, you do not even need to look that closely to understand his conclusion is wrong:

 

He is trying to say that by retaking the test, he is showing that error is causing a regression towards the mean. However, by retaking the test, you are mitigating the effects of measurement error. If his methodology truly showed a regression towards the mean, he would be helping to prove the opposite hypothesis of his own. It's simple.

829882[/snapback]

Perhaps I need to be more specific about what my hypothesis actually is. (Although it's not really "my" hypothesis since I read about it elsewhere.) Someone who gets an extremely high score on an I.Q. test is likely to get a somewhat lower score if that person is retested. Someone who gets a very low score on an I.Q. test is likely to get a slightly higher score if retested. This is because people who did very well on the first I.Q. test tend to be disproportionately lucky, while those who did very poorly on the first I.Q. test tend to be disproportionately unlucky. This phenomenon would disappear if there was no measurement error associated with the I.Q. test.
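That last sentence is easy to check directly. Here's a sketch (assumed Python, same toy population as earlier in the thread) that runs the test-then-retest exercise for everyone scoring 180 or better, once with the error-prone test and once with a perfect one:

```python
import random

random.seed(2)

def take_test(iq, p_err):
    # with probability p_err score 10 too high, with p_err score 10 too low
    r = random.random()
    return iq + 10 if r < p_err else iq - 10 if r < 2 * p_err else iq

population = [190] * 10 + [180] * 100 + [170] * 1000

for p_err in (0.2, 0.0):
    first = [(iq, take_test(iq, p_err)) for iq in population]
    top = [(iq, s) for iq, s in first if s >= 180]
    retest = [take_test(iq, p_err) for iq, _ in top]
    print(f"error rate {p_err}: first-test mean "
          f"{sum(s for _, s in top) / len(top):.1f}, "
          f"retest mean {sum(retest) / len(retest):.1f}")
```

With the 20% error rate, the top group's retest mean drops by several points; with the error turned off, the two means match. In this toy model, at least, the regression comes entirely from the measurement error.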


Perhaps I need to be more specific about what my hypothesis actually is. (Although it's not really "my" hypothesis since I read about it elsewhere.) Someone who gets an extremely high score on an I.Q. test is likely to get a somewhat lower score if that person is retested. Someone who gets a very low score on an I.Q. test is likely to get a slightly higher score if retested. This phenomenon would vanish if there was no measurement error on either of the tests.

830055[/snapback]

 

That phenomenon would also need to exist first. :D


That phenomenon would also need to exist first.  :D

830059[/snapback]

I've tried explaining it every way I can. Since you still won't believe me, I suggest you go here.

A person who scored 750 out of a possible 800 on the quantitative portion of the SAT takes the SAT again (a different form of the test is used). Assuming the second test is the same difficulty as the first and that there was no learning or practice effect, what score would you expect the person to get on the second test? The surprising answer is that the person is more likely to score below 750 than above 750; the best guess is that the person would score about 725. If this surprises you, you are not alone. This phenomenon, called regression to the mean, is counterintuitive and confusing to many professionals as well as students.

Also:

There are just not many people who can afford to be unlucky and still score as high as 750. A person scoring 750 was, more likely than not, luckier than average. Since, by definition, luck does not hold from one administration of the test to another, a person scoring 750 on one test is expected to score below 750 on a second test. This does not mean that they necessarily will score less than 750, just that it is likely. The same logic can be applied to someone scoring 250. Since there are more people with "true" scores between 250 and 300 than between 200 and 250, a person scoring 250 is more likely to have a "true" score above 250 and be unlucky than a "true" score below 250 and be lucky. This means that a person scoring 250 would be expected to score higher on the second test. For both the person scoring 750 and the person scoring 250, their expected score on the second test is between the score they received on the first test and the mean.

 

This is the phenomenon called "regression toward the mean." Regression toward the mean occurs any time people are chosen based on observed scores that are determined in part or entirely by chance. On any task that contains both luck and skill, people who score above the mean are likely to have been luckier than people who score below the mean. Since luck does not hold from trial to trial, people who score above the mean can be expected to do worse on a subsequent trial.
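For what it's worth, the 750-to-725 figure drops out of the standard true-score model: if an observed score is true score plus independent error, the expected retest score is mean + reliability * (observed - mean), where reliability is the fraction of score variance that's real. Here's a sketch that checks that by brute force (assumed Python; the normal distributions and the specific spreads are my illustration, not the quoted source's):

```python
import random

random.seed(1)
MEAN = 500
SD_TRUE = 90.0      # spread of true ability (assumed)
SD_ERROR = 30.0     # measurement error per sitting (assumed)
reliability = SD_TRUE**2 / (SD_TRUE**2 + SD_ERROR**2)  # 0.9 with these numbers

second_scores = []
for _ in range(1_000_000):
    true = random.gauss(MEAN, SD_TRUE)
    first = true + random.gauss(0, SD_ERROR)
    if 745 <= first <= 755:                  # the people who scored about 750
        second_scores.append(true + random.gauss(0, SD_ERROR))

print("formula's prediction:", MEAN + reliability * (750 - MEAN))  # 725.0
print("simulated retest mean:", sum(second_scores) / len(second_scores))
```

Both numbers come out near 725: the people who scored around 750 were, on average, about 25 points luckier than their true ability, and that luck doesn't carry over to the retest.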


I've tried explaining it every way I can. Since you still won't believe me, I suggest you go here.

830088[/snapback]

 

But that's not because of error, you idiot!!!!! :doh:

 

A normally distributed measurement regresses toward the mean because it is a normally distributed measurement, not because it's "wrong".


But that's not because of error, you idiot!!!!!  :doh:

 

A normally distributed measurement regresses toward the mean because it is a normally distributed measurement, not because it's "wrong".

830290[/snapback]

I suggest you reread the bolded sentence in the quote from my above post.

Regression toward the mean occurs any time people are chosen based on observed scores that are determined in part or entirely by chance.

The element of chance--that is, of measurement error--causes scores to seem more widely distributed than they really are. Thus, someone who scored a 750 on the math section of the SAT is expected to get a 725 upon retaking that test.

