Regression toward the mean


Of course, by your own logic, if your test scores were that extreme they'd be wrong...  :(

 

As for the rest of this...you are AMAZINGLY clueless.  Phenomenally.  As evidenced by the very simple fact that you can't apply the other discussions and examples that have gone on to your own misbegotten model.  The dice are relevant, because they illustrate regression toward the mean as it's mathematically defined as a function of variance and probability.  The widget example's relevant because it illustrates the difference between measuring population variance and measuring error.  Neither of which you've shown any capability of understanding, fixated as you are on this "measurement error causes regression toward the mean, which is why smart people aren't as smart as they think they are, even though I'm smarter than I think I am and that's not error, so the government should pay me to have smart kids" stupidity.

863195[/snapback]

I think we both agree someone who takes an I.Q. test twice will not get the same score each time. Whether these differences are caused by measurement error or by natural variation in a person's underlying ability to think is not relevant to the mathematical phenomenon I've been describing. I've tacitly assumed that if someone were to take an I.Q. test 1000 times, the results would be normally distributed, and centered around his or her true I.Q. I believe you're working with the same understanding.

 

Given the logic of the above paragraph, it's possible for someone to obtain an I.Q. score higher than his or her true I.Q. (which I refer to as getting "lucky") or a score lower than his or her true I.Q. (getting "unlucky.") For the purposes of the phenomenon I've been describing, it doesn't matter whether the good or bad "luck" is caused by measurement error, or by random variation in someone's underlying ability to think. Either way, it's possible for someone who scored a 140 on an I.Q. test to be a lucky 130 or an unlucky 150. Your dice example doesn't discount this; nor does your widget example, nor does your attempt to debate the meanings of words like "variance" and "measurement error."
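
To make that assumption concrete, here's a minimal sketch; the 5-point test-retest standard deviation is an illustrative assumption, not a figure from any actual I.Q. test:

[code]
# Minimal sketch of the model described above: one person's repeated
# I.Q. scores modeled as Gaussian noise around a fixed true score.
# The 5-point test-retest SD is an assumed, illustrative value.
import random

random.seed(1)
TRUE_IQ = 140
NOISE_SD = 5

scores = [random.gauss(TRUE_IQ, NOISE_SD) for _ in range(1000)]
print(sum(scores) / len(scores))  # close to 140: the noise averages out
[/code]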



Yes, that's right...because it allows you to determine the distribution OF ERROR in the test ("test" here meaning "the measurement process", including all the variables that can affect the outcome.  NOT meaning "The IQ test" itself.  "Test" actually has a more specific definition than the one you've been using - yet another word you can't define, what a !@#$ing surprise.)

 

But what you've been saying is that, because of the error, the person's "true IQ" will regress with repeated testing toward the POPULATION MEAN, and not the mean error of zero.  Which is bull sh--.  Furthermore, you've been saying that the test results regress toward the population mean BECAUSE OF THE ERROR, which is complete and utter bull sh--.

 

And you're arguing all this to prove that a eugenics program would work...when your argument is that the people you'd favor in the eugenics program are not as exceptional as is required BY the eugenics program.  Or, to put it more simply, in deference to your little pea-sized brain: HOW DO YOU CHOOSE YOUR BREEDING POPULATION FOR YOUR EUGENICS PROGRAM WHEN YOU "KNOW" YOUR BREEDING POPULATION IS SCORING "TOO HIGH" ON THE SELECTION CRITERIA BECAUSE OF "ERROR"?  :(

 

The basic reason is that you're a retard who can't string different concepts together to make a coherent argument, even in the rare cases when he DOES understand the concepts.

863223[/snapback]

After the mess you made with the word "binomial," your repeated attempts to tell me that I don't know the definitions of specific words carry no credibility. Stick to arguing with logical concepts, please.

 

As for the phenomenon I've been describing, it's obvious that a score of 140 is more likely to signal a lucky 130 than an unlucky 150. Take a group of people who scored a 140 on an I.Q. test. When the lucky 130s in the group retake the test, they will, on average, get 130s. When the unlucky 150s retake the test, they will, on average, get 150s. Because the lucky 130s outnumber the unlucky 150s, the group's score on the retest will be a little closer to the population mean than it was the first time around.
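
A minimal simulation of that retest argument; the 100/15 population parameters and the 5-point test-retest SD are assumed purely for illustration:

[code]
# Select everyone who scored about 140 the first time, retest them,
# and compare group means. Population mean 100 / SD 15 and a 5-point
# test-retest SD are assumed, illustrative values.
import random

random.seed(1)
true_iqs = [random.gauss(100, 15) for _ in range(1_000_000)]
first = [t + random.gauss(0, 5) for t in true_iqs]

# true I.Q.s of everyone whose first score landed near 140
group = [t for t, f in zip(true_iqs, first) if 139.5 <= f <= 140.5]

second = [t + random.gauss(0, 5) for t in group]
print(sum(second) / len(second))  # about 136 here: below the 140 they scored
[/code]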

 

As far as the eugenics angle of things, it's true that variation in I.Q. test scores would dampen the effects of a program. A true 140 who got lucky and scored a 150 would be given greater incentives to have kids than a true 150 who got unlucky and scored a 140. The program wouldn't be perfect, but it would be a lot better than the present efforts to improve the quality of the gene pool. "What present efforts?" you ask. Exactly my point. :P


I think we both agree someone who takes an I.Q. test twice will not get the same score each time. Whether these differences are caused by measurement error or by natural variation in a person's underlying ability to think is not relevant to the mathematical phenomenon I've been describing. I've tacitly assumed that if someone were to take an I.Q. test 1000 times, the results would be normally distributed, and centered around his or her true I.Q. I believe you're working with the same understanding.

 

1) "Measurement error" and "natural variatiion in a person's underlying ability to think" ARE THE SAME !@#$ING THING!!! :(

 

2) Yes, we are working with that same understanding...but...

 

Given the logic of the above paragraph, it's possible for someone to obtain an I.Q. score higher than his or her true I.Q. (which I refer to as getting "lucky") or a score lower than his or her true I.Q. (getting "unlucky.") For the purposes of the phenomenon I've been describing, it doesn't matter whether the good or bad "luck" is caused by measurement error, or by random variation in someone's underlying ability to think. Either way, it's possible for someone who scored a 140 on an I.Q. test to be a lucky 130 or an unlucky 150. Your dice example doesn't discount this; nor does your widget example, nor does your attempt to debate the meanings of words like "variance" and "measurement error."

863248[/snapback]

 

...that only describes the variation in ERROR. It does not describe at all the variation in the POPULATION. Which is what I've been saying for the past seventy pages. All you've been describing is regression OF THE ERROR toward the mean OF THE ERROR (namely: zero) for a given individual. But you INSIST on saying it's identical to the regression toward the POPULATION MEAN, which it isn't.

 

Which is why my dice example doesn't discount that...it refers to regression of extreme values toward the POPULATION MEAN in the absence of ERROR, showing the fundamental difference between the error and regression toward the mean - a fundamental difference you still insist on misunderstanding. The "attempt" to "debate" "variance" and "measurement error", however, DOES speak directly to your misunderstanding, because you are consistently confusing measurement error inherent in the test (i.e. the entire testing process, including such minutiae as whether or not the subject got enough sleep the night before) with the variance in scores in the overall population. THEY ARE TWO DIFFERENT THINGS.


As for the phenomenon I've been describing, it's obvious that a score of 140 is more likely to signal a lucky 130 than an unlucky 150. Take a group of people who scored a 140 on an I.Q. test. When the lucky 130s in the group retake the test, they will, on average, get 130s. When the unlucky 150s retake the test, they will, on average, get 150s. Because the lucky 130s outnumber the unlucky 150s, the group's score on the retest will be a little closer to the population mean than it was the first time around.

863272[/snapback]

 

How do you not understand that this isn't valid methodology? You're taking an arbitrary subset of data, saying "See, it behaves a certain way", and completely ignoring the fact that the rest of the data you discarded CANCELS OUT THE BEHAVIOR OF YOUR ARBITRARILY CHOSEN SUBSET. Set it up as Gaussian distributions of scores and error, integrate over all space, and you'll see it clearly: ERROR DOES NOT CAUSE REGRESSION TOWARD THE MEAN.
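
Here's a minimal numerical version of that check, with assumed 100/15/5 parameters, if you bother to run it:

[code]
# The no-selection check: scores and error both Gaussian, NO subset taken.
# Over the whole population the error averages to zero on both sittings,
# so the population mean does not move. (100/15/5 are assumed values.)
import random

random.seed(1)
true_iqs = [random.gauss(100, 15) for _ in range(1_000_000)]
first  = [t + random.gauss(0, 5) for t in true_iqs]
second = [t + random.gauss(0, 5) for t in true_iqs]

print(sum(first) / len(first))    # ~100
print(sum(second) / len(second))  # ~100: no shift without selection
[/code]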


How do you not understand that this isn't valid methodology?  You're taking an arbitrary subset of data, saying "See, it behaves a certain way", and completely ignoring the fact that the rest of the data you discarded CANCELS OUT THE BEHAVIOR OF YOUR ARBITRARILY CHOSEN SUBSET.  Set it up as Gaussian distributions of scores and error, integrate over all space, and you'll see it clearly: ERROR DOES NOT CAUSE REGRESSION TOWARD THE MEAN.

863310[/snapback]

 

He can't answer that or perform those calculations, because hyperstats hasn't told him how.


As for the phenomenon I've been describing, it's obvious that a score of 140 is more likely to signal a lucky 130 than an unlucky 150. Take a group of people who scored a 140 on an I.Q. test. When the lucky 130s in the group retake the test, they will, on average, get 130s. When the unlucky 150s retake the test, they will, on average, get 150s. Because the lucky 130s outnumber the unlucky 150s, the group's score on the retest will be a little closer to the population mean than it was the first time around.

863272[/snapback]

 

Do you really mean that the normal distribution stops where you say it does? Wouldn't the natural law of gravity hold that you will have a normal distribution around any score? (except for the super genius at the top and the lonely dolt at the bottom)


How do you not understand that this isn't valid methodology?  You're taking an arbitrary subset of data, saying "See, it behaves a certain way", and completely ignoring the fact that the rest of the data you discarded CANCELS OUT THE BEHAVIOR OF YOUR ARBITRARILY CHOSEN SUBSET.  Set it up as Gaussian distributions of scores and error, integrate over all space, and you'll see it clearly: ERROR DOES NOT CAUSE REGRESSION TOWARD THE MEAN.

863310[/snapback]

I make no apology for the fact that the subset is chosen arbitrarily. In fact, that's a necessary step in achieving the phenomenon I've been describing. The only "regression toward the mean" question that's relevant to the underlying eugenics discussion is this: "Suppose someone scores a 140 on an I.Q. test. How well can this person be expected to do upon retaking the test?" The answer is that people who score 140s on I.Q. tests are more likely to be lucky 130s than unlucky 150s; and therefore tend to do less well upon retaking the test. The same logic applies to any other arbitrarily selected score--individual people tend to regress toward the mean upon being retested.

 

As for your suggestion that I set up Gaussian distributions of scores and error, I already did that in my simulation. I did not, however, "integrate over all space," because doing so would provide no help in answering the question of how an individual with a high I.Q. test score would tend to do upon being retested. First I created a Gaussian population. Each member was given an error term based on a random number converted into its appropriate, probability-based spot in a Gaussian distribution. (For example, the number 0.5 would put you at the midpoint of the Gaussian error distribution; the number 0.84 would put you one standard deviation to the right of the distribution's mean, etc.). I wanted to answer the question, "do people who get high scores on I.Q. tests do equally well the second time around?" Therefore, I only retested those population members whose original scores were above my threshold limit.
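
In outline, a minimal sketch of that procedure looks like this; the exact parameters of my simulation aren't given above, so the 100/15 population, the 5-point error SD, and the 130 cutoff are stand-in, illustrative values:

[code]
# Gaussian population, error from a uniform draw mapped through the
# Gaussian inverse CDF (0.5 -> the error mean, 0.84 -> about one SD
# above it), retest only those who cleared the cutoff.
# 100/15 population, 5-point error SD, 130 cutoff: assumed values.
from random import random, seed
from statistics import NormalDist

seed(1)
pop = NormalDist(100, 15)
err = NormalDist(0, 5)
CUTOFF = 130

true_iqs = [pop.inv_cdf(random()) for _ in range(100_000)]
first = [t + err.inv_cdf(random()) for t in true_iqs]

# keep only those whose first score cleared the cutoff, then retest
keep = [(t, f) for t, f in zip(true_iqs, first) if f > CUTOFF]
second = [t + err.inv_cdf(random()) for t, _ in keep]

print(sum(f for _, f in keep) / len(keep))  # mean first score of the selected
print(sum(second) / len(second))            # retest mean: lower, nearer 100
[/code]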


Do you really mean that the normal distribution stops where you say it does?  Wouldn't the natural law of gravity hold that you will have a normal distribution around any score? (except for the super genius at the top and the lonely dolt at the bottom)

863357[/snapback]

Where did I say the normal distribution "stops" at a certain point? The "natural law of gravity" isn't relevant to a discussion of I.Q. test scores. Even the "super genius" at the top is capable of getting lucky and scoring better than his or her true I.Q.; or of getting unlucky and scoring lower.


Where did I say the normal distribution "stops" at a certain point?

863386[/snapback]

 

How else am I to understand that it's obvious that a score of 140 is more likely to signal a lucky 130 than an unlucky 150? Wouldn't a normal distribution around 140 indicate that you're likely to have an equal number of 130s and 150s? How is one a more obvious 140 than the other?


How else am I to understand that it's obvious that a score of 140 is more likely to signal a lucky 130 than an unlucky 150?  Wouldn't a normal distribution around 140 indicate that you're likely to have an equal number of 130s and 150s?  How is one a more obvious 140 than the other?

863409[/snapback]

The normal distribution for I.Q.s is (at least supposed to be) centered around 100. 130 is closer to the center of that distribution than 150. Hence, there are more people with I.Q.s of 130 than of 150.
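
For concreteness, using the standard mean-100, SD-15 scaling:

[code]
# Relative frequency of true 130s vs. true 150s on the standard
# I.Q. scale (mean 100, SD 15 -- the usual convention).
from statistics import NormalDist

iq = NormalDist(100, 15)
print(iq.pdf(130) / iq.pdf(150))  # ~35: the 130s heavily outnumber the 150s
[/code]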


I make no apology for the fact that the subset is chosen arbitrarily. In fact, that's a necessary step in achieving the phenomenon I've been describing. The only "regression toward the mean" question that's relevant to the underlying eugenics discussion is this: "Suppose someone scores a 140 on an I.Q. test. How well can this person be expected to do upon retaking the test?" The answer is that people who score 140s on I.Q. tests are more likely to be lucky 130s than unlucky 150s; and therefore tend to do less well upon retaking the test. The same logic applies to any other arbitrarily selected score--individual people tend to regress toward the mean upon being retested.

 

As for your suggestion that I set up Gaussian distributions of scores and error, I already did that in my simulation. I did not, however, "integrate over all space," because doing so would provide no help in answering the question of how an individual with a high I.Q. test score would tend to do upon being retested. First I created a Gaussian population. Each member was given an error term based on a random number converted into its appropriate, probability-based spot in a Gaussian distribution. (For example, the number 0.5 would put you at the midpoint of the Gaussian error distribution; the number 0.84 would put you one standard deviation to the right of the distribution's mean, etc.). I wanted to answer the question, "do people who get high scores on I.Q. tests do equally well the second time around?" Therefore, I only retested those population members whose original scores were above my threshold limit.

863379[/snapback]

 

AND THUS YOU PROVED THAT ERROR REGRESSES TOWARD THE MEAN OF THE ERROR, AS I'VE BEEN SAYING.

 

The problem YOU have is that you think that represents regression to the mean of the population. Because you're too !@#$ing stupid to know the difference. :thumbsup:


The normal distribution for I.Q.s is (at least supposed to be) centered around 100. 130 is closer to the center of that distribution than 150. Hence, there are more people with I.Q.s of 130 than of 150.

863416[/snapback]

 

Yes, there are more 130s than 150s. But that wasn't my question. My question is: how do you know that the lucky 130s are more likely to get a 140 than the unlucky 150s?

 

By your logic, wouldn't a lot of those lucky 130s really be lucky 120s?


AND THUS YOU PROVED THAT ERROR REGRESSES TOWARD THE MEAN OF THE ERROR, AS I'VE BEEN SAYING.

 

The problem YOU have is that you think that represents regression to the mean of the population.  Because you're too !@#$ing stupid to know the difference.  :thumbsup:

863436[/snapback]

 

It feels like these threads are a type of choose-your-own-adventure book, but they all lead to the same ending: us being right, and Holcomb's Arm being wrong.

 

No matter what twists and turns he takes, it always leads back to the above stated conclusion.


It feels like these threads are a type of choose-your-own-adventure book, but they all lead to the same ending: us being right, and Holcomb's Arm being wrong.

 

No matter what twists and turns he takes, it always leads back to the above stated conclusion.

863446[/snapback]

 

I expect his response will be along the lines of:

 

You just don't understand because you're stupid.  Let me explain it again.  People lucky enough to get high IQ scores will score lower the next time around because they're less lucky, and their scores will be closer to the distribution's center.  I can safely ignore everyone else, because I'm a !@#$ing moron, and therefore the effect is regression toward the mean caused by error.

 

You have to admire the sheer stubbornness with which he mindlessly clings to the "If you ignore everything that doesn't prove my point, the rest proves my point, whatever that is" rationalization, though... :thumbsup:


Yes, there are more 130s than 150s.  But that wasn't my question.  My question is: how do you know that the lucky 130s are more likely to get a 140 than the unlucky 150s?

 

By your logic, wouldn't a lot of those lucky 130s really be lucky 120s?

863442[/snapback]

Suppose you were looking at a population with 10,000 people with true I.Q.s of 130, 1000 people with true I.Q.s of 140, and 100 people with true I.Q.s of 150. Any given person has a 10% chance of getting lucky and scoring 10 points too high, or unlucky and scoring 10 points too low.

 

Of the 1000 140s, 800 will be scored correctly with a 140 on the I.Q. test. Of the 100 150s, 10 will get unlucky and score a 140 on the test. And of the 10,000 130s, 1000 will get lucky and score a 140 on the test. The people who scored a 140 on the test consist of the following:

- 800 people with a true I.Q. of 140

- 1000 people with a true I.Q. of 130

- 10 people with a true I.Q. of 150

 

If you ask this group of 1810 people to retake the test, the 800 140s will (on average) get 140s the second time around, the 10 150s will (on average) get 150s the second time around, and the 1000 130s will get an average score of 130 on their second try. If you average all this out, you'll see that the group as a whole will score lower than 140 the second time around.
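
Checking that arithmetic:

[code]
# The arithmetic from the example above: 1810 people scored a 140 the
# first time; their expected retest average is pulled below 140 by the
# large number of lucky 130s in the group.
group = {140: 800, 130: 1000, 150: 10}  # true I.Q. -> head count
total = sum(group.values())             # 1810
retest_mean = sum(iq * n for iq, n in group.items()) / total
print(retest_mean)  # about 134.5
[/code]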


AND THUS YOU PROVED THAT ERROR REGRESSES TOWARD THE MEAN OF THE ERROR, AS I'VE BEEN SAYING.

 

The problem YOU have is that you think that represents regression to the mean of the population.  Because you're too !@#$ing stupid to know the difference.  :thumbsup:

863436[/snapback]

The process of error regressing toward the mean of the error causes those who obtain extreme scores the first time around to, on average, obtain somewhat less extreme scores upon being retested. This is because those with very high scores on the first test are disproportionately lucky, and those with very low scores are disproportionately unlucky.


The process of error regressing toward the mean of the error causes those who obtain extreme scores the first time around to, on average, obtain somewhat less extreme scores upon being retested. This is because those with very high scores on the first test are disproportionately lucky, and those with very low scores are disproportionately unlucky.

863486[/snapback]

 

See a psychiatrist. Please. You've gone beyond simple stupidity into the realm of mental disorder.

