Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 It's time to slam the door on this so-called debate. The following quote is from a statistics textbook, and should settle the issue once and for all.

Any empirical measurement of a characteristic is composed of two parts--the true value of the characteristic plus or minus some error. On repeated measurements the true value remains the same but the error component fluctuates. We know that when we measure a large number of objects with respect to a characteristic, some of the objects will score high, some low, and some in between. Now wherever the object scores, part of the score is due to an error component. Thinking in terms of conditional probabilities, we can ask ourselves whether those objects that scored exceptionally high were not benefiting from a large error component; and similarly, whether those that scored exceptionally low were in receipt of a large negative error component. Cast in a different light, suppose we knew only the size of the object's error component. What would we predict as the object's total score if we knew it had an exceptionally large positive error component--would it tend to be above or below the mean? The dynamics of this phenomenon become apparent when we remeasure our set of objects on the same characteristic and compare their respective values on the two measurements. What we find is that those that scored exceptionally high (or low) on the first measurement score closer to the mean on the second measure; that is, there is regression to the mean. The greater the error component, the greater will be the regression or "turning back" to the mean.

This phenomenon is worth bearing in mind whenever exceptional scores on a single measurement are singled out for attention . . . Consider . . . a mutual . . . fund that boasts having the best performance of all the leading investment funds during the most recent year. We should not be too impressed with this performance. After all, of the many funds, . . . one of them had to do better than all the others. This is a truism. Again, we want to know the reliability of this performance. Will it duplicate its performance next year, or will another fund claim the leadership role, while the other regresses toward the mean?
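A quick way to see the textbook's mutual-fund point is to simulate it. The sketch below is not from the textbook; the numbers (200 funds, "true" skill drawn from a normal distribution, independent year-to-year luck) are assumptions chosen only to illustrate the mechanism. It picks the fund that beat all the others in year one and checks how far ahead of the pack it is in year two.

```python
import numpy as np

# Illustrative simulation of the mutual-fund example.
# Assumed (not from the textbook): 200 funds, true annual skill ~ N(8, 2),
# plus independent year-to-year luck ~ N(0, 6). All numbers are made up.
rng = np.random.default_rng(0)
n_funds, n_trials = 200, 2000

gap_to_mean_y1 = []
gap_to_mean_y2 = []
for _ in range(n_trials):
    skill = rng.normal(8, 2, n_funds)          # stays the same both years
    year1 = skill + rng.normal(0, 6, n_funds)  # observed return, year 1
    year2 = skill + rng.normal(0, 6, n_funds)  # observed return, year 2
    best = np.argmax(year1)                    # the fund that "beat all the others"
    gap_to_mean_y1.append(year1[best] - year1.mean())
    gap_to_mean_y2.append(year2[best] - year2.mean())

print("Top fund's lead over the average, year 1: %.1f" % np.mean(gap_to_mean_y1))
print("Same fund's lead over the average, year 2: %.1f" % np.mean(gap_to_mean_y2))
# The year-2 lead is much smaller: last year's winner moves back toward the pack,
# even though every fund's underlying skill is unchanged.
```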
Bungee Jumper Posted December 20, 2006 Posted December 20, 2006 It's time to slam the door on this so-called debate. The following quote is from a statistics textbook, and should settle the issue once and for all. 872921[/snapback] Wow. As impressed as I am that you finally read a book, you managed to find a textbook that's actually more wrong than you are. Any of the financial experts out there want to discuss the fallacy of "error" in exceptional mutual fund performance?
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 Wow. As impressed as I am that you finally read a book, you managed to find a textbook that's actually more wrong than you are. Any of the financial experts out there want to discuss the fallacy of "error" in exceptional mutual fund performance? 873003[/snapback] Let me get this straight. First you were saying that you were right, and that I was wrong. Then you started saying that you were right, and Hyperstats was wrong. Now you're saying that you're right, and a statistics textbook is wrong. I'm not even going to debate the mutual fund example. If you care to read up on the subject, you'll find that mutual funds that obtain an exceptionally good performance in one period tend to move toward the industry average in subsequent periods.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 Just to drive another nail into the coffin of this debate, I've provided a link from Tufts.

Suppose you were told that when any group of subjects with low values on some measurement is later remeasured, their mean value will increase without the use of any treatment or intervention. Would this worry you? It sure had better! If this were true and an ineffective treatment were applied to such a group, the increase might be interpreted improperly as a treatment effect. This could result in the costly implementation of ineffective programs or faulty public policies that block the development of real solutions for the problem that was meant to be addressed. The behavior described in the first paragraph is real. It is called the regression effect. Unfortunately, the misinterpretation of the regression effect described in the second paragraph is real, too. It is called the regression fallacy. The regression effect is shown graphically and numerically in the following series of plots and computer output.
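The warning in the Tufts passage is easy to reproduce in a toy example. The sketch below uses assumed numbers (not Tufts'): it selects the lowest-scoring subjects on a first measurement, applies no treatment at all, and remeasures. The selected group's mean still rises, which is exactly the "improvement" a regression-fallacy reading would credit to a treatment.

```python
import numpy as np

# Illustrative only: 1000 subjects, true level ~ N(50, 10), measurement noise ~ N(0, 8).
# These parameters are assumptions for the sketch, not from the Tufts page.
rng = np.random.default_rng(1)
true_level = rng.normal(50, 10, 1000)
before = true_level + rng.normal(0, 8, 1000)   # screening measurement
after = true_level + rng.normal(0, 8, 1000)    # remeasurement, no intervention

low_group = before < np.percentile(before, 20)  # "treat" the worst 20%
print("Selected group's mean before: %.1f" % before[low_group].mean())
print("Selected group's mean after:  %.1f" % after[low_group].mean())
# The mean goes up with no treatment at all -- the regression effect.
# Attributing that rise to an intervention is the regression fallacy.
```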
Bungee Jumper Posted December 20, 2006 Posted December 20, 2006 I'm saying you're wrong, because you can't read for sh--. I'm not even going to debate the mutual fund example. If you care to read up on the subject, you'll find that mutual funds that obtain an exceptionally good performance in one period tend to move toward the industry average in subsequent periods. 873061[/snapback] And according to you, that's because of error. You still can't distinguish between "error" and population variance, which is the entire crux of your total idiocy. Error is error. Population variance is population variance. They are not the same thing.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 For those who don't like Tufts, here's a quote from Berkeley. In most test-retest situations, the correlation between scores on the test and scores on the re-test is positive, so individuals who score much higher than average on one test tend to score above average, but closer to average, on the other test. . . . Similarly, individuals who are much lower than average in one variable tend to be closer to average in the other (but still below average). Those who perform best usually do so with a combination of skill (which will be present in the retest) and exceptional luck (which will likely not be so good in a retest). Those who perform worst usually do so as the result of a combination of lack of skill (which will still be lacking in a retest) and bad luck (which is likely to be better in a retest). . . . A particularly high score could have come from someone with an even higher true ability, but who had bad luck, or someone with a lower true ability who had good luck. Because more individuals are near average, the second case is more likely; when the second case occurs on a retest, the individual's luck is just as likely to be bad as good, so the individual's second score will tend to be lower. The same argument applies, mutatis mutandis, to the case of a particularly low score on the first test.
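One way to make the Berkeley argument concrete is to track the skill and luck components separately for the top scorers. The hypothetical simulation below (parameters are assumptions, not Berkeley's) takes the top 10% on a first test: their luck on that test turns out to have been well above average, while on the retest their skill is unchanged and their luck averages out near zero, so their scores fall back toward the mean without reaching it.

```python
import numpy as np

# Hypothetical numbers: skill ~ N(100, 15), luck (chance error) ~ N(0, 5) on each test.
rng = np.random.default_rng(2)
skill = rng.normal(100, 15, 100_000)
luck1 = rng.normal(0, 5, 100_000)
luck2 = rng.normal(0, 5, 100_000)
test1 = skill + luck1
test2 = skill + luck2

top = test1 >= np.percentile(test1, 90)   # top 10% on the first test
print("Top group, test 1: skill %.1f + luck %.1f = score %.1f"
      % (skill[top].mean(), luck1[top].mean(), test1[top].mean()))
print("Top group, test 2: skill %.1f + luck %.1f = score %.1f"
      % (skill[top].mean(), luck2[top].mean(), test2[top].mean()))
# The group's skill is identical on both tests; only the luck component changes,
# from clearly positive (partly why they made the top group) to roughly zero.
# The group's average score therefore drops toward 100 but stays above it.
```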
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 Here's a quote from the EPA: Regression Effects The tendency of subjects, who are initially selected due to extreme scores, to have subsequent scores move inward toward the mean. Also known as statistical regression/regression to the mean/regression fallacy.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 Here's a quote from the University of Washington.

Regression effect: in almost all test-retest situations
- The bottom group on the first test will on average show some improvement on the second test
- The top group on the first test will do a bit worse on the second test

Regression fallacy: thinking that the regression effect must be due to something important, not just spread around the SD line.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 To remove any doubt in the matter, here's something from The University of Chicago. If two successive trait measurements have a less-than-perfect correlation, individuals or populations will, on average, tend to be closer to the mean on the second measurement (the so-called regression effect). Thus, there is a negative correlation between an individual's state at time 1 and the change in state from time 1 to time 2. In addition, whenever groups differ in their initial mean values, the expected change in the mean value from time 1 to time 2 will differ among the groups. For example, birds feeding nestlings lose weight, but initially heavier birds lose more weight than lighter birds, a result expected from the regression effect. In sexual selection, males who remain unmated in the first year are, on average, less attractive than mated males. The regression effect predicts that these males will increase their attractiveness in the second year more than mated males.
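The Chicago passage makes a specific quantitative claim: whenever two measurements are less than perfectly correlated, the state at time 1 is negatively correlated with the change from time 1 to time 2. A hypothetical check of that claim (arbitrary parameters, nothing to do with birds) is sketched below: generate two noisy measurements of the same underlying trait and look at corr(x1, x2 - x1).

```python
import numpy as np

# Arbitrary illustration: trait ~ N(0, 1), each measurement adds independent noise ~ N(0, 0.7).
rng = np.random.default_rng(3)
trait = rng.normal(0, 1, 50_000)
x1 = trait + rng.normal(0, 0.7, 50_000)
x2 = trait + rng.normal(0, 0.7, 50_000)

print("corr(x1, x2):      %.2f" % np.corrcoef(x1, x2)[0, 1])
print("corr(x1, x2 - x1): %.2f" % np.corrcoef(x1, x2 - x1)[0, 1])
# The test-retest correlation is well below 1, and the initial value is
# negatively correlated with the subsequent change: high starters tend to
# fall and low starters tend to rise, with no real dynamics at all.
```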
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 Just in case Bungee Jumper is preparing to question the intelligence and statistical knowledge of all the sources I've quoted thus far, here's a quote from Stanford.

Preliminary question: Let (X_i, Y_i) be the i'th student's scores on quizzes 1 and 2. Suppose X and Y have mean 100, SD 15, and correlation r = .6. A claim: (i) "The scores of people above average on the first test will drop overall by 5 on the second test" (ii) "The scores of people below average on the first test will rise overall by 5 on the second test" Is this right? Is there a reason? Or just chance? [substitutes: IQ tests, mutual fund returns] . . .

4. THE CHANCE ERROR MODEL Return to the two quizzes example. This is another example of the regression effect. It is a consequence of the correlation r being less than 1. Indeed, the regression fallacy occurs when you argue that there is some substantive reason other than chance variation going on.

Here is another way to think about the regression effect. In the test-retest situation, we make a model that Y = T + e (test score = true score + chance error). Assume the true scores follow the normal density curve, mean 100, SD 15. Suppose the chance error is as likely to be positive as negative, and is around 5 in size: e ~ (0, 5). (For simplicity, could imagine that e is either +5 or -5 with 50-50 chance.) Take people who scored 140 on the test. Two possibilities:
• a) true score below 140, positive chance error (T < 140, + error), e.g. 135 + 5
• b) true score above 140, negative chance error (T > 140, - error), e.g. 145 - 5
A plot of the normal curve shows that the first explanation is more likely – the true score is most likely lower, and so on average, the scores on the second test will be a bit lower than the first.
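The "drop overall by 5" figure in the Stanford claim can be checked against the standard regression-line formula E[Y | X = x] = 100 + r(x - 100). The arithmetic below is not part of the quoted handout; it is a supporting sketch that assumes the scores are roughly normal with the stated mean 100, SD 15, and r = .6.

```python
import math

# Sketch checking the "drop by about 5" claim with the regression-line formula
# E[Y | X = x] = 100 + r * (x - 100), using mean 100, SD 15, r = .6 from the quote.
# Assumes roughly normal scores, so the above-average group's mean excess over 100
# is sd * sqrt(2/pi) (the mean of a half-normal distribution).
r, sd = 0.6, 15.0

avg_excess = sd * math.sqrt(2 / math.pi)   # about 12 points above 100
expected_drop = (1 - r) * avg_excess       # the part of that excess lost on retest

print("Above-average group, test 1 mean: %.1f" % (100 + avg_excess))
print("Expected test 2 mean:             %.1f" % (100 + r * avg_excess))
print("Expected drop:                    %.1f" % expected_drop)
# (1 - .6) * 12.0 is roughly 4.8 points, i.e. the "drop overall by 5" in the claim.
```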
ieatcrayonz Posted December 20, 2006 Posted December 20, 2006 In sexual selection, males who remain unmated in the first year are, on average, less attractive than mated males. The regression effect predicts that these males will increase their attractiveness in the second year more than mated males. So what are you doing? Make up? Working out? If you're Canadian you might want to look up "toothbrush" on Google. You can order them online even if you can't buy one in Canada. How's it working?
Bungee Jumper Posted December 20, 2006 Posted December 20, 2006 Here's a quote from the University of Washington.Regression effect: in almost all test-retest situations- The bottom group on the first test will on average show some improvement on the second test - The top group on the first test will do a bit worse on the second test - Regression fallacy: thinking that the regression effect must be due to something important, not just spread around the SD line. 873117[/snapback] The definition of "regression fallacy" is FAR more relevant to your bull sh-- than "regression effect". Which is not the same as "regression toward the mean" anyway. All you've done is change your argument again in a feeble and futile attempt to seem as though you have even the merest clue what you're talking about.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 In honor of the Bills' secondary, here's a quote from Ohio State.

Regression Effect: In virtually all test-retest situations, the bottom group on the first test will on average show some improvement on the second test and the top group will on average fall back. This effect is known as the regression effect.

Regression Fallacy: The regression fallacy is thinking that the regression effect must be due to something important, not just due to spread about the regression line.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 The definition of "regression fallacy" is FAR more relevant to your bull sh-- than "regression effect". Which is not the same as "regression toward the mean" anyway. All you've done is change your argument again in a feeble and futile attempt to seem as though you have even the merest clue what you're talking about. 873154[/snapback] The objections you're attempting to raise have already been answered by the quotes I've provided. For example, the I.Q. test/retest example I repeatedly provided (and which you repeatedly ridiculed) appears in the Stanford quote. I've won this debate, so there's no use for you to continue to argue.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 Here's a little something from UCLA.

Regression fallacy--test-retest
1. Observed values are a combination of true score and chance error.
2. Chance is bidirectional--sometimes pushing a score one way and sometimes another.
3. If you measure something and the score has a large negative chance error, chances are that the second time you measure it, the chance error will be closer to the mean.
4. This implies that in test-retest situations, individuals who are outliers in the first testing will simply by chance tend to score closer to the mean on second testing.
5. The book refers to it as the regression effect; elsewhere it is called regression to the mean.
Bungee Jumper Posted December 20, 2006 Posted December 20, 2006 The objections you're attempting to raise have already been answered by the quotes I've provided. For example, the I.Q. test/retest example I repeatedly provided (and which you repeatedly ridiculed) appears in the Stanford quote. I've won this debate, so there's no use for you to continue to argue. 873166[/snapback] No, dumbass, the Stanford quote says exactly what I've been saying: the error regresses toward the mean of the error, not the mean of the population. You just can't read.
Orton's Arm Posted December 20, 2006 Author Posted December 20, 2006 No, dumbass, the Stanford quote says exactly what I've been saying: the error regresses toward the mean of the error, not the mean of the population. You just can't read. 873256[/snapback] Since you didn't understand the Stanford example the first time around, here it is again:

Here is another way to think about the regression effect. In the test-retest situation, we make a model . . . Take people who scored 140 on the test. Two possibilities:
• a) true score below 140, positive chance error (T < 140, + error), e.g. 135 + 5
• b) true score above 140, negative chance error (T > 140, - error), e.g. 145 - 5
A plot of the normal curve shows that the first explanation is more likely – the true score is most likely lower, and so on average, the scores on the second test will be a bit lower than the first.

Precisely what portion of the bolded text are you too stupid to understand? It's coming from Stanford, so you can't use your usual technique of discrediting the source. It's saying that someone who scored a 140 on an I.Q. test is more likely to be a lucky 135 than an unlucky 145. Therefore, according to the Stanford example, someone who scored a 140 on the first I.Q. test is expected to obtain a slightly lower score upon being retested. Even you, with your talent for creating the appearance of difference where there is no difference, will find it difficult to make the Stanford example seem different from the I.Q. test example I've repeatedly offered, and which you've repeatedly ridiculed. I've won this debate. Hopefully you will learn to be a little less arrogant, Ramius a little less of a loudmouth, and Coli a little less holier than thou. But I have to admit that I don't have much hope for significant character improvements for any of the three of you. Barring that, the next time you call me an idiot, I want the rest of the board to remember this statistical debate. I was right, and you were wrong; and this debate went on for over 50 pages. In the end, I won. So many highly credible sources supported what I've written about the test/retest situation that you can't possibly hope to discredit them all. The Stanford example is so ridiculously similar to what I've been saying that you won't be able to fool people into thinking that it supports you rather than me.
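The "lucky 135 versus unlucky 145" comparison can be made numerical. The sketch below is not from the Stanford handout; it only uses the figures quoted in it (true scores normal with mean 100 and SD 15, chance error of plus or minus 5 with equal probability) to compare the two explanations for an observed 140.

```python
import math

# Normal density for true scores, mean 100, SD 15 (the figures quoted above).
def normal_pdf(x, mu=100.0, sigma=15.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Someone observed at 140 with a +/-5 chance error is either a true 135 who got +5
# or a true 145 who got -5. Since the two errors are equally likely, the relative
# plausibility of the two explanations is just the ratio of the two densities.
p135, p145 = normal_pdf(135), normal_pdf(145)
print("density at 135: %.5f" % p135)
print("density at 145: %.5f" % p145)
print("135+5 is about %.1f times as likely as 145-5" % (p135 / p145))
# There are noticeably more true-135s than true-145s, so the observed 140 is
# more likely a lucky 135 -- and the expected retest score is a bit below 140.
```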
Ramius Posted December 20, 2006 Posted December 20, 2006 No, dumbass, the Stanford quote says exactly what I've been saying: the error regresses toward the mean of the error, not the mean of the population. You just can't read. 873256[/snapback] It's pointless. He still can't even understand what he's reading, nor does he know the proper statistical definition. You've explained the above statement to him hundreds of times, and he still can't comprehend it. He isn't going to any time soon. Until he can successfully learn the definitions of error and variance, there's no hope.
Ramius Posted December 20, 2006 Posted December 20, 2006 I've won this debate. 873280[/snapback] Only in your pathetic little mind. You won this debate just like your boner buddy Holcomb won the starting job over Losman in August.
Bungee Jumper Posted December 20, 2006 Posted December 20, 2006 Since you didn't understand the Stanford example the first time around, here it is again Precisely what portion of the bolded text are you too stupid to understand? It's coming from Stanford, so you can't use your usual technique of discrediting the source. It's saying that someone who scored a 140 on an I.Q. test is more likely to be a lucky 135 than an unlucky 145. Therefore, according to the Stanford example, someone who scored a 140 on the first I.Q. test is expected to obtain a slightly lower score upon being retested. Even you, with your talent for creating the appareance of difference where there is no difference, will find it difficult to make the Stanford example seem different from the I.Q. test example I've repeatedly offered, and which you've repeatedly ridiculed. 1) It's coming from a Stanford web page, which makes it a little easier to ridicule. 2) It contains no math, which makes it a LOT easier to ridicule. 3) I don't need to ridicule it. You simply don't understand what that's saying and what it means: in a test that has a certain measure of error, people with extreme amounts of error will have less extreme amounts of error upon retesting as the error regresses to the mean. This is not the same as regression to the mean of the population. That's what everything you've linked to today has said. That's what I've said. That's ENTIRELY different from what you've been saying. All you've proved today is that 1) you still can't distinguish between variance and error, and 2) you can't read. I've won this debate. Hopefully you will learn to be a little less arrogant, Ramius a little less of a loudmouth, and Coli a little less holier than thou. But I have to admit that I don't have much hope for significant character improvements for any of the three of you. Barring that, the next time you call me an idiot, I want the rest of the board to remember this statistical debate. I was right, and you were wrong; and this took place for over 50 pages. In the end, I won. So many highly credible sources supported what I've written about the test/retest situation that you can't possibly hope to discredit them all. The Stanford example is so ridiculously similar to what I've been saying that you won't be able to fool people into thinking that it supports you rather than me. 873280[/snapback] This is why I keep this topic going. Your delusions entertain me.