
Down goes another GOP talking point



We're NOT arguing the effect, we're arguing that it's NOT caused by "measurement error", you idiot.  It's caused by what we all - including Wraith - said: the normal distribution of scores about the mean.  Extreme values in a normally distributed sample tend to be less extreme if measured again, not because they're in error, but because they're extreme, and the probability distribution of the sample dictates that there's a VERY high probability of getting a less extreme value than an equally or more extreme one for a given extreme measure.  AND THAT'S NOT ERROR, YOU !@#$ING IDIOT!!!  Refer to my dice (plural, NOT a single die, which does NOT represent normally distributed probability, hence is not applicable) example, or ANY CREDIBLE STATISTICS SOURCE ON THE DAMN PLANET, YOU LOON!!!!!  Like a textbook, maybe.
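For reference, here's a rough sketch of that dice example in Python; the details below are filled in for illustration and aren't quoted from anywhere in this thread.

```python
# Rough sketch of the dice example: two independent rolls of a pair of dice.
# Conditioning on an extreme first total (12), the second total still averages
# about 7; the "regression" comes entirely from the distribution of totals,
# with no measurement error involved anywhere.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
first = rng.integers(1, 7, N) + rng.integers(1, 7, N)    # totals of two dice
second = rng.integers(1, 7, N) + rng.integers(1, 7, N)   # a fresh pair of rolls

rolled_12 = first == 12
print("share of first rolls that were 12:", rolled_12.mean())
print("average second total for that group:", second[rolled_12].mean())  # ~7.0
```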

 

:)

859047[/snapback]

 

thank you. no one has been arguing that regression toward the mean doesn't exist; we have been correctly arguing that regression toward the mean occurs due to normal distributions and variance (another term that Holcomb's Arm can neither define nor comprehend)

 

to the arm: REGRESSION TOWARDS THE MEAN COMES FROM VARIANCE AND NORMAL DISTRIBUTIONS, WHICH IS NOT ERROR, DUMBASS!!!



Yes it is. Suppose an error-prone height measurement system. The average person who's initially measured at 7'6" will, upon being remeasured, appear to slightly regress toward the mean.

859068[/snapback]

 

The population's mean height? Or the mean error of your measuring system?

 

Because your "Monte Carlo" simulation (and please stop calling it that; it's an insult to people who've done real ones) proved the latter. If your measurement error is normally distributed, your measurement error will regress toward the mean of the measurement error. Again, as I've been saying...you're working with two normal distributions, and confusing the regression of one with the regression of the other.

 

And THAT is why you're a !@#$ing idiot. Because you can't tell the difference between a normally distributed data set, and normally distributed error within the normally distributed data set. That's why I was so damned careful in defining the parameters of the example in the other thread: because I have to demonstrate that there's a normally distributed set of normal distributions at work (i.e. there isn't a normally distributed set of distinct IQ scores, there's a normally distributed set of Gaussian distributions representing the measurement error of each data point), and I have to demonstrate, as everyone including Wraith has stated, that it's the main normal distribution and not the error causing the regression.

 

Not that you'll be even remotely smart enough to understand that...but I'll do it anyway, since I find calculus entertaining. :)


The population's mean height?  Or the mean error of your measuring system?

 

Because your "Monte Carlo" simulation (and please stop calling it that; it's an insult to people who've done real ones) proved the latter.  If your measurement error is normally distributed, your measurement error will regress toward the mean of the measurement error.  Again, as I've been saying...you're working with two normal distributions, and confusing the regression of one with the regression of the other. 

 

And THAT is why you're a !@#$ing idiot.  Because you can't tell the difference between a normally distributed data set, and normally distributed error within the normally distributed data set.  That's why I was so damned careful in defining the parameters of the example in the other thread: because I have to demonstrate that there's a normally distributed set of normal distributions at work (i.e. there isn't a normally distributed set of distinct IQ scores, there's a normally distributed set of Gaussian distributions representing the measurement error of each data point), and I have to demonstrate, as everyone including Wraith has stated, that it's the main normal distribution and not the error causing the regression.

 

Not that you'll be even remotely smart enough to understand that...but I'll do it anyway, since I find calculus entertaining.  :)

859084[/snapback]

In my Monte Carlo simulation, I began by creating a population with normally distributed I.Q.s. To assign each member an I.Q., I started with a random number and used the norminv command to convert it to a point on the normal distribution. I then measured the I.Q. of each member of the population. Each measurement was based on the member's true I.Q., plus a normally distributed error term with a mean of zero and a standard deviation 1/4 as large as the standard deviation of the underlying population's I.Q. distribution.

 

Those who scored above the Threshold level on the I.Q. test were given a second I.Q. test. The second test was based on the same underlying I.Q. as the first test, as well as on the same error formula. As a group, Threshold members consistently scored slightly worse on the retest than they did on the initial test. If you removed measurement error from the test, this phenomenon would disappear.
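A rough sketch of that simulation in Python (the population mean of 100, SD of 15, and threshold of 130 below are illustrative assumptions; the post doesn't specify them):

```python
# Sketch of the described simulation: true I.Q.s are normally distributed,
# each test adds zero-mean normal error with SD one quarter of the population
# SD, and everyone above a threshold on the first test is retested.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
POP_MEAN, POP_SD = 100.0, 15.0     # assumed population parameters
ERR_SD = POP_SD / 4                # error SD = 1/4 of population SD, per the post
THRESHOLD = 130.0                  # assumed cutoff for the "Threshold" group

true_iq = rng.normal(POP_MEAN, POP_SD, N)
test1 = true_iq + rng.normal(0.0, ERR_SD, N)   # first error-prone measurement
test2 = true_iq + rng.normal(0.0, ERR_SD, N)   # independent retest, same error model

sel = test1 >= THRESHOLD
print("Threshold group, mean first score :", test1[sel].mean())
print("Threshold group, mean retest score:", test2[sel].mean())    # slightly lower
print("Threshold group, mean true I.Q.   :", true_iq[sel].mean())  # matches the retest mean
```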


Suppose someone scores 140 on an I.Q. test. This person is planning on taking the test a second time. At first glance, you'd think the person's expected score the second time around would be another 140. That isn't the case--the expected score on the retest will be somewhere in the 120s or 130s.

 

There are three possibilities here: a 140 score could indicate someone with an I.Q. of 140. It could indicate someone with a lower I.Q. (130 for example) who got lucky on the test. Or it could indicate someone with an I.Q. of 150 who got unlucky. Of these three possibilities, the second is far more likely than the third. Therefore, the average person who gets a 140 on an I.Q. test has an I.Q. that's less than 140. On average someone who gets a 140 on an I.Q. test will get a somewhat lower score upon being retested.
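To put a rough number on that, here's a quick calculation under an assumed classical measurement model (true I.Q. ~ N(100, 15) and test error ~ N(0, 7.5) are illustrative choices, not figures from this thread):

```python
# Under this model, the expected true I.Q. behind an observed score shrinks
# toward the population mean by the reliability factor
# sd_true^2 / (sd_true^2 + sd_err^2). All numbers here are illustrative.
mu, sd_true, sd_err = 100.0, 15.0, 7.5
reliability = sd_true**2 / (sd_true**2 + sd_err**2)   # 0.8 with these numbers
observed = 140.0
expected_true = mu + reliability * (observed - mu)
print(expected_true)   # 132.0, which is also the expected score on a retest
```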

 

It took Wraith a while to realize that supposed statistics experts such as Bungee Jumper and Ramius were disputing this widely-known, non-controversial phenomenon. But once he realized what the argument was about, he took my side.

859038[/snapback]

If you assume a normal distribution of the sample and a normal distribution of the measurement error, the average of the people who scored a 140 on the test would indeed score lower on the retest. But that is due to the normal distributions, not due to the measurement error. A negative bias in the error term would give you a similar result. A positive bias would produce results contrary to your expectations. But the reason that subsequent test scores will be lower is the normal distribution of the sample. The phenomenon you are describing occurs because the sample population is normally distributed, not because error is causing the retest scores to be lower on average.

 

I think the reason you appear to be confused is that you are assuming that any observed variation of a value from the "true" value must be due to error. It needn't be. There are myriad factors which can produce the variance (including pure random chance).

 

In the classic height measurement example that you are attempting to restate, the regression toward the mean was not in the measurement of the height of men; it was in the measurement of the height of their sons. There would be microscopically small measurement error in those samples, and yet very tall men typically have sons who are shorter than they are, and very short men typically have sons who are taller than they are. But even then, it doesn't always occur that way. If a very tall man has a son, it is possible that the child, upon reaching adulthood, will be even taller than he is; likewise, a very short man's son could turn out even shorter. There you have regression to the mean with absolutely no measurement error to speak of.
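A rough sketch of that case in Python (the 0.5 father-son correlation and the height parameters are assumptions chosen purely for illustration): heights are recorded exactly, with no measurement error, and the regression still shows up because father and son heights are only partially correlated.

```python
# Regression toward the mean with zero measurement error: heights are observed
# exactly, but father and son heights are only partially correlated.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
MEAN, SD, RHO = 69.0, 3.0, 0.5   # inches; all illustrative assumptions

father = rng.normal(MEAN, SD, N)
# Bivariate-normal construction: the son inherits RHO of the father's deviation.
son = MEAN + RHO * (father - MEAN) + rng.normal(0.0, SD * np.sqrt(1 - RHO**2), N)

tall = father >= MEAN + 2 * SD                     # fathers two SDs above the mean
print("mean height of very tall fathers:", father[tall].mean())
print("mean height of their sons       :", son[tall].mean())   # much closer to 69
```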

 

You have chosen one extremely limited example and have attempted, on the basis of how narrowly you have defined it, to show that error causes regression to the mean in general. It doesn't. The regression to the mean is caused, as your debating partners have stated, by the normal distribution of the population and variance within the samples of the population.


Those who scored above the Threshold level on the I.Q. test were given a second I.Q. test. The second test was based on the same underlying I.Q. as the first test, as well as on the same error formula. As a group, Threshold members consistently scored slightly worse on the retest than they did on the initial test. If you removed measurement error from the test, this phenomenon would disappear.

859088[/snapback]

 

And that's because your simulation is entirely !@#$ed up; as I keep saying, you're measuring the regression of the normally distributed error, but over a data set that you've managed to choose in such a half-assed manner that you're not even doing that right. You're choosing a threshold so that your overall error is overwhelmingly giving a net positive bias...which means the regression toward the mean of the error you've selected for is of course going to have a negative bias, which you mistakenly attribute to the normal distribution of the IQ scores itself. This is because YOU HAVE NO IDEA WHAT YOU'RE !@#$ING DOING!

 

It's also why I need time to do it properly: a correct model of the system isn't a normal distribution of discrete values that each have a normal distribution of error applied after the fact. A single data point under the Gaussian, in other words, doesn't have a value of X (140, if you prefer). It has a value of X*exp(-(e-E)^2/(2*sigma^2)), where e is the measured error and E is the mean error (or thereabouts...like I said, I need time to do the math; I haven't had time yet). Like I also said earlier: do the math. Don't do a half-assed "Monte Carlo" (sic) simulation. DO. THE. MATH.

 

Of course, you can't do the math. That would require reading a textbook, which you can't do either, it seems.


I think the reason you appear to be confused is that you are assuming that any observed variation of a value from the "true" value must be due to error. It needn't be. There are myriad factors which can produce the variance (including pure random chance).

859089[/snapback]

 

Oh, no. He understands that. He's already pretty much stated that "chance" and "error" are the same damn thing. :)


If you assume a normal distribution of the sample and a normal distribution of the measurement error, the average of the people who scored a 140 on the test would indeed score lower on the retest. But that is due to the normal distributions, not due to the measurement error. A negative bias in the error term would give you a similar result. A positive bias would produce results contrary to your expectations. But the reason that subsequent test scores will be lower is the normal distribution of the sample. The phenomenon you are describing occurs because the sample population is normally distributed, not because error is causing the retest scores to be lower on average.

 

I think the reason you appear to be confused is that you are assuming that any observed variation of a value from the "true" value must be due to error. It needn't be. There are myriad factors which can produce the variance (including pure random chance).

 

In the classic height measurement example that you are attempting to restate, the regression toward the mean was not in the measurement of the height of men; it was in the measurement of the height of their sons. There would be microscopically small measurement error in those samples, and yet very tall men typically have sons who are shorter than they are, and very short men typically have sons who are taller than they are. But even then, it doesn't always occur that way. If a very tall man has a son, it is possible that the child, upon reaching adulthood, will be even taller than he is; likewise, a very short man's son could turn out even shorter. There you have regression to the mean with absolutely no measurement error to speak of.

 

You have chosen one extremely limited example and have attempted, on the basis of how narrowly you have defined it, to show that error causes regression to the mean in general. It doesn't. The regression to the mean is caused, as your debating partners have stated, by the normal distribution of the population and variance within the samples of the population.

859089[/snapback]

Your post is intelligently written, eloquently expressed, but nonetheless wrong.

 

For instance, a normally distributed population is not required for regression toward the mean to take place. Consider a population where everyone had an I.Q. of 100, and where people took an error-prone I.Q. test. Those who scored significantly above or below the mean on their first I.Q. test would, on average, appear to regress toward the mean upon retaking the test.

 

I agree that regression toward the mean wouldn't take place in a uniformly distributed population, except at the very extreme edges. For example, if there were equal numbers of people with I.Q.s of 140, 150, and 160, then a score of 150 on an I.Q. test would be just as likely to indicate an unlucky 160 as a lucky 140. Therefore, someone who scored a 150 on an I.Q. test the first time around would, on average, score a 150 upon retaking the test. Regression toward the mean would still exist at the extreme edge of this distribution. Suppose there were equal numbers of 170s, 180s, and 190s, but zero 200s. Those who got lucky and scored a 200 on the I.Q. test would, on average, score 10 points closer to the mean upon retaking the test.
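A rough simulation of both claims (the error SD of 10 and the conditioning windows below are illustrative assumptions; the post doesn't specify an error model):

```python
# Uniformly distributed true I.Q.s with an error-prone test. In the middle of
# the distribution, first scores near 150 don't regress on retest; past the
# upper edge, first scores near 200 do.
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
ERR_SD = 10.0   # assumed test error SD

def first_and_retest_means(levels, low, high):
    """Mean first score and mean retest score for people whose first score fell in [low, high)."""
    true = rng.choice(levels, N)                  # equally many people at each level
    first = true + rng.normal(0.0, ERR_SD, N)
    second = true + rng.normal(0.0, ERR_SD, N)
    sel = (first >= low) & (first < high)
    return first[sel].mean(), second[sel].mean()

print(first_and_retest_means([140, 150, 160], 149, 151))   # ~ (150, 150): no regression
print(first_and_retest_means([170, 180, 190], 199, 1e9))   # retest mean well below first-score mean
```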

 

You are correct to point to the shape of the distribution as a relevant factor in regression toward the mean. However, your take on measurement error seems to miss what this discussion is about. Someone who gets a high score on an I.Q. test will, on average, obtain a slightly lower score upon being retested. This phenomenon could not take place unless there was measurement error.

 

As for the height example, I'm not "attempting to restate" anything. I was merely pointing out the rather obvious fact that if a height measurement system involved error, those who obtained extreme height measurements would appear to regress toward the mean upon being remeasured. The height example you're thinking of is important, but not relevant to the discussion of this particular statistical phenomenon.


And that's because your simulation is entirely !@#$ed up; as I keep saying, you're measuring the regression of the normally distributed error, but over a data set that you've managed to choose in such a half-assed manner that you're not even doing that right. You're choosing a threshold so that your overall error is overwhelmingly giving a net positive bias...which means the regression toward the mean of the error you've selected for is of course going to have a negative bias, which you mistakenly attribute to the normal distribution of the IQ scores itself. This is because YOU HAVE NO IDEA WHAT YOU'RE !@#$ING DOING!

I'm sorry, but I know exactly what I'm doing. Yes, the subset I'm retesting has an overwhelming net positive error bias. You think I don't know that? When I retest the Threshold members, that net positive error bias goes away, which causes the scores of the Threshold members to mildly regress toward the mean.

 

But guess what? The same thing happens in real life. People who've obtained above-the-mean scores on I.Q. tests are disproportionately lucky, while those who've obtained below-the-mean scores are disproportionately unlucky. Ask a group of people who obtained a high score on an I.Q. test to retake it, and guess what? The group's net luck goes away, and their average scores fall a little. The more luck-based the test, the more strongly this group will regress toward the mean upon being retested.
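That last sentence can be checked with the same kind of sketch as before (population parameters and threshold are again assumed for illustration): the larger the error SD, the larger the average drop on retest for the above-threshold group.

```python
# How the size of the regression depends on how "luck-based" the test is:
# rerun the selection-and-retest experiment at several error SDs.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
POP_MEAN, POP_SD, THRESHOLD = 100.0, 15.0, 130.0   # illustrative assumptions

for err_sd in (2.0, 5.0, 10.0):
    true_iq = rng.normal(POP_MEAN, POP_SD, N)
    first = true_iq + rng.normal(0.0, err_sd, N)
    second = true_iq + rng.normal(0.0, err_sd, N)
    sel = first >= THRESHOLD
    drop = first[sel].mean() - second[sel].mean()
    print(f"error SD {err_sd:>4}: average drop on retest = {drop:.2f} points")
```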


I'm sorry, but I know exactly what I'm doing. Yes, the subset I'm retesting has an overwhelming net positive error bias. You think I don't know that? When I retest the Threshold members, that net positive error bias goes away, which causes the scores of the Threshold members to mildly regress toward the mean.

 

But guess what? The same thing happens in real life. People who've obtained above-the-mean scores on I.Q. tests are disproportionately lucky, while those who've obtained below-the-mean scores are disproportionately unlucky. Ask a group of people who obtained a high score on an I.Q. test to retake it, and guess what? The group's net luck goes away, and their average scores fall a little. The more luck-based the test, the more strongly this group will regress toward the mean upon being retested.

859111[/snapback]

 

BUT LUCK IS NOT ERROR, YOU IDIOT.

 

You don't know what you're doing, simply because you have absolutely no idea what "error" is. Which is why you don't know you're measuring it, which is why you think you're getting a result that you are IN NO WAY getting.


BUT LUCK IS NOT ERROR, YOU IDIOT.

 

You don't know what you're doing, simply because you have absolutely no idea what "error" is.  Which is why you don't know you're measuring it, which is why you think you're getting a result that you are IN NO WAY getting.

859135[/snapback]

Without measurement error, there could be no luck in taking the test. Someone with an I.Q. of 130 would always score a 130 on an I.Q. test. They'd never get lucky and score a 140, nor unlucky and score a 120.


Your post is intelligently written, eloquently expressed, but nonetheless wrong.

 

For instance, a normally distributed population is not required for regression toward the mean to take place. Consider a population where everyone had an I.Q. of 100, and where people took an error-prone I.Q. test. Those who scored significantly above or below the mean on their first I.Q. test would, on average, appear to regress toward the mean upon retaking the test.

 

I agree that regression toward the mean wouldn't take place in a uniformly distributed population, except at the very extreme edges. For example, if there were equal numbers of people with I.Q.s of 140, 150, and 160, then a score of 150 on an I.Q. test would be just as likely to indicate an unlucky 160 as a lucky 140. Therefore, someone who scored a 150 on an I.Q. test the first time around would, on average, score a 150 upon retaking the test. Regression toward the mean would still exist at the extreme edge of this distribution. Suppose there were equal numbers of 170s, 180s, and 190s, but zero 200s. Those who got lucky and scored a 200 on the I.Q. test would, on average, score 10 points closer to the mean upon retaking the test.

 

You are correct to point to the shape of the distribution as a relevant factor in regression toward the mean. However, your take on measurement error seems to miss what this discussion is about. Someone who gets a high score on an I.Q. test will, on average, obtain a slightly lower score upon being retested. This phenomenon could not take place unless there was measurement error.

 

As for the height example, I'm not "attempting to restate" anything. I was merely pointing out the rather obvious fact that if a height measurement system involved error, those who obtained extreme height measurements would appear to regress toward the mean upon being remeasured. The height example you're thinking of is important, but not relevant to the discussion of this particular statistical phenomenon.

859106[/snapback]

:):)


What little idiot brain cell rattling around in that confused skull of yours gave you that ridiculous idea?

859195[/snapback]

 

That idea makes perfect sense. When I go to the roulette table in Vegas, and bet the house on 11, and 11 comes up on the wheel, I didn't win because I was lucky, I won because there was error in the wheel. duh! :w00t:


Without measurement error, there could be no luck in taking the test.

859157[/snapback]

 

And this is why you have been completely wrong across 50 pages and multiple threads. How the hell can you even make it LOOK like you understand stats (which you don't) when you can't tell the difference between luck and error?

 

My God you are dense.


That idea makes perfect sense. When I go to the roulette table in Vegas, and bet the house on 11, and 11 comes up on the wheel, I didn't win because I was lucky, I won because there was error in the wheel. duh! :w00t:

859320[/snapback]

Are you really this dense? I wrote that without measurement error, there could be no luck in taking an I.Q. test. Someone with an I.Q. of 130 would always score a 130 on the test--both the lucky 140 possibility and unlucky 120 possibility are precluded.

 

Your Vegas table has precisely zero relevance to this. The purpose of a Vegas table isn't to measure height, or I.Q., or anything else really. It's simply a random number selection device.

 

My point--which apparently sailed right over your head--is that if someone is being measured for the same quality twice, regression toward the mean will tend to occur the second time around, but only if there's measurement error in the underlying test.


My point--which apparently sailed right over your head--is that if someone is being measured for the same quality twice, regression toward the mean will tend to occur the second time around, but only if there's measurement error in the underlying test.

859519[/snapback]

 

Only if the range of error is the same as the range of possible test scores...because, again, all you're describing is how the error evolves. The error and the test are two completely different things.

 

It's also irrelevant to the topic that started this nonsense: testing the IQ of the same person twice is not the same as testing the IQ of children and comparing it to the IQ of their parents...something else that you are yet again too stupid to understand.

 

This all gets back - again - to the simple fact that you don't know what measurement, error, regression, mean, or variance mean.


Only if the range of error is the same as the range of possible test scores...because, again, all you're describing is how the error evolves.  The error and the test are two completely different things.

 

It's also irrelevant to the topic that started this nonsense: testing the IQ of the same person twice is not the same as testing the IQ of children and comparing it to the IQ of their parents...something else that you are yet again too stupid to understand.

 

This all gets back - again - to the simple fact that you don't know what measurement, error, regression, mean, or variance mean.

859534[/snapback]

Suppose someone scores a 750 on the math section of the SAT. That person, upon retaking the test, will generally score a 725, because of regression toward the mean.

 

Suppose that two people each score a 750 on the math section of the SAT. Instead of retaking the test and getting 725s, they decide to have children. Suppose math ability is 100% determined by genetics. What math scores should we expect from the couple's children? 725s. It's not that the children's math ability is any closer to the mean than their parents', it's that the parents got a little lucky when they took the test.
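Put in numbers (all of them hypothetical, chosen only so the arithmetic lands near the 750/725 figures above):

```python
# Hypothetical numbers: math ability ~ N(500, 110), test error ~ N(0, 37).
# The reliability sd_true^2 / (sd_true^2 + sd_err^2) comes out near 0.9, so the
# expected true ability behind an observed 750 (and hence the expected retest
# score, and the expected score of fully-heritable children) is about 725.
mu, sd_true, sd_err = 500.0, 110.0, 37.0
reliability = sd_true**2 / (sd_true**2 + sd_err**2)   # ~0.90
observed = 750.0
expected_true = mu + reliability * (observed - mu)
print(round(expected_true))   # ~725
```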

 

Your inability to grasp this point directly led to the hijacking of a number of threads, and a completely unreasonable number of pages of discussion.


Suppose someone scores a 750 on the math section of the SAT. That person, upon retaking the test, will generally score a 725, because of regression toward the mean.

 

Suppose that two people each score a 750 on the math section of the SAT. Instead of retaking the test and getting 725s, they decide to have children. Suppose math ability is 100% determined by genetics. What math scores should we expect from the couple's children? 725s. It's not that the children's math ability is any closer to the mean than their parents', it's that the parents got a little lucky when they took the test.

 

Your inability to grasp this point directly led to the hijacking of a number of threads, and a completely unreasonable number of pages of discussion.

859607[/snapback]

 

Stop thinking I don't understand your point. I DO understand your point. It's just utterly, completely, tragically wrong. :w00t:

