A peer-reviewed study about Wikipedia's accuracy



Actually, I brought "dice" into it, because a system of multiple dice accurately represents a statistical distribution that demonstrates (as I've demonstrated TWICE now) that regression toward the mean is due to the statistical variance of the distribution and not error of the measurement. You then responded with the example of a "die", which has a "true average roll" of 3.5, which you roll once to attempt to measure its average, thereby causing the die to be in error by the difference between the face value of the die and its "true average roll" of 3.5. (Never mind that that difference isn't error, it's actually VARIANCE :rolleyes:). You then DENIED THAT YOU EVER SAID IT! Now you're saying that you said it, but you didn't mean it.

 

And this is where DonteWhitner's wrong: you're not repeating yourself. Every post is something new. You keep changing your nonsense theory over and over whenever you get backed into a corner and can't weasel out of being stupid anymore. That's why I find it so amusing...you have found more ways to be completely and utterly incorrect than I even knew were possible. Hell, you'd think by now you would have said something right just out of dumb luck... <_<

It's messed up posts like the above which caused our discussion to drag on as long as it did.

 

In your dice rolling example, regression toward the mean happens because the initial results were obtained by random chance. An extreme value on an initial roll is likely to be followed by regression toward the mean on a subsequent roll.
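
If it helps, here's a minimal Python sketch of that point (the seed and trial count are arbitrary, made-up choices): condition on an extreme first roll, and the second roll averages right back around 3.5.

```python
# Minimal sketch: when results are pure chance, an extreme first roll
# tends to be followed by a second roll near the mean of 3.5.
# All numbers here (seed, trial count) are arbitrary.
import random

random.seed(1)
rerolls_after_six = []
for _ in range(100_000):
    first = random.randint(1, 6)
    second = random.randint(1, 6)
    if first == 6:                      # extreme value on the initial roll
        rerolls_after_six.append(second)

# The follow-up rolls average ~3.5, nowhere near the extreme 6.
print(sum(rerolls_after_six) / len(rerolls_after_six))
```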

 

Now let's say test results are entirely dependent on something innate, such as height. If Joe measures 6'5" on Monday, odds are he'll measure 6'5" when you remeasure him on Wednesday.

 

I.Q. tests combine the qualities of the above two. Results are mostly based on something innate, in that your score on an initial test is a strong predictor of how well you'll do on a retest. But there's measurement error, which introduces an element of random chance into I.Q. test scores. Someone who obtains an above-average score on an initial I.Q. test is likely to be luckier than average, as well as smarter than average. On being retested, the luck portion of the score disappears, while the innate part remains. It's the disappearance of the luck part which causes test scores to move toward the population's mean in a test/retest situation.
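
A rough sketch of that test/retest idea, with completely made-up numbers (abilities drawn around 100, and a deliberately large luck term so the effect is easy to see):

```python
# Rough sketch: score = innate ability + luck. The luck is redrawn on
# the retest, so the top scorers fall partway back toward the mean.
# All distributions and cutoffs here are made up for illustration.
import random

random.seed(2)
ability = [random.gauss(100, 15) for _ in range(100_000)]   # innate part
test1 = [a + random.gauss(0, 10) for a in ability]          # ability + luck
test2 = [a + random.gauss(0, 10) for a in ability]          # fresh luck

# Everyone who scored 130+ the first time around...
high = [(t1, t2) for t1, t2 in zip(test1, test2) if t1 >= 130]
print(sum(t1 for t1, _ in high) / len(high))  # roughly 137
print(sum(t2 for _, t2 in high) / len(high))  # noticeably lower, ~126
```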

 

In my die rolling analogy, you've arbitrarily defined a die's "true" value as 3.5, and you attempt to measure this value by rolling it a single time. Hence, any roll will result in some measurement error, with extreme rolls entailing the greatest degree of error.


The real reasons you won't admit your alma mater are 1) it's somewhere that is extremely easy to make fun of, and 2) you're afraid someone here is going to forward your moronic ideas to your university president, and they'll rescind your diploma.

I'm hopeful my alma mater isn't as intolerant of intellectual freedom as you imply. And while my alma mater is pretty similar to most other top-50 schools, I've no doubt that you and Bungee Jumper would make fun of it anyway.


It's messed up posts like the above which caused our discussion to drag on as long as it did.

 

In your dice rolling example, regression toward the mean happens because the initial results were obtained by random chance. An extreme value on an initial roll is likely to be followed by regression toward the mean on a subsequent roll.

 

Now let's say test results are entirely dependent on something innate, such as height. If Joe measures 6'5" on Monday, odds are he'll measure 6'5" when you remeasure him on Wednesday.

 

I.Q. tests combine the qualities of the above two. Results are mostly based on something innate, in that your score on an initial test is a strong predictor of how well you'll do on a retest. But there's measurement error, which introduces an element of random chance into I.Q. test scores. Someone who obtains an above-average score on an initial I.Q. test is likely to be luckier than average, as well as smarter than average. On being retested, the luck portion of the score disappears, while the innate part remains. It's the disappearance of the luck part which causes test scores to move toward the population's mean in a test/retest situation.

 

In my die rolling analogy, you've arbitrarily defined a die's "true" value as 3.5, and you attempt to measure this value by rolling it a single time. Hence, any roll will result in some measurement error, with extreme rolls entailing the greatest degree of error.

 

No, it's messed up explanations like this that cause the discussion to drag. YOU DON'T ROLL A DIE TO MEASURE ITS AVERAGE VALUE! There is no error of measurement in a die. It's a process for selecting a discrete value from a statistical distribution. It's not even a measurement. There is not one concept in that example that you got right. And you're still clinging to it, even after you denied ever saying it! :rolleyes:

 

Admit that it's a really bad example. Then maybe you'll start understanding. Not before. Until you do, you'll never comprehend that variance is not error, and therefore since variance causes regression, it is not error that causes regression.


I'm hopeful my alma mater isn't as intolerant of intellectual freedom as you imply. And while my alma mater is pretty similar to most other top-50 schools, I've no doubt that you and Bungee Jumper would make fun of it anyway.

 

Intellectual freedom - again, not an asset when you're always wrong.


I'm hopeful my alma mater isn't as intolerant of intellectual freedom as you imply. And while my alma mater is pretty similar to most other top-50 schools, I've no doubt that you and Bungee Jumper would make fun of it anyway.

 

Based on your posts, I'm guessing by intellectual freedom you mean the freedom to be an incompetent moron. But either way, I'm sure Dean Ronald McDonald is intolerant of your idiocy.


I'm hopeful my alma mater isn't as intolerant of intellectual freedom as you imply. And while my alma mater is pretty similar to most other top-50 schools, I've no doubt that you and Bungee Jumper would make fun of it anyway.

 

Big whoop if we make fun of it. You know what? If it makes you feel better, you can pick on my alma mater, the University of Illinois.


No, it's messed up explanations like this that cause the discussion to drag. YOU DON'T ROLL A DIE TO MEASURE ITS AVERAGE VALUE! There is no error of measurement in a die. It's a process for selecting a discrete value from a statistical distribution. It's not even a measurement. There is not one concept in that example that you got right. And you're still clinging to it, even after you denied ever saying it! :rolleyes:

 

Admit that it's a really bad example. Then maybe you'll start understanding. Not before. Until you do, you'll never comprehend that variance is not error, and therefore since variance causes regression, it is not error that causes regression.

 

Once again, he can't distinguish between a single event and a distribution of events. But then again, he appears to have trouble with the word "different".

 

Let's try something, HA. This might help you define "different." We'll use an example from your school days. A Big Mac and Chicken McNuggets are different. As in NOT THE SAME. Chicken McNuggets and the Chicken Deluxe sandwich could be the same, because they are both chicken. But the chicken sandwich and the Quarter Pounder are different. Are we getting anywhere?


Big whoop if we make fun of it. You know what? If it makes you feel better, you can pick on my alma mater, the University of Illinois.

 

He's welcome to make fun of either one of my schools. Clarkson U or Florida St.


No, it's messed up explanations like this that cause the discussion to drag. YOU DON'T ROLL A DIE TO MEASURE ITS AVERAGE VALUE!

You give a person one I.Q. test to estimate what their score would be if given 1000 tests. You give an NFL prospect just one 40 yard dash test to see how he'd do if asked to run the 40 yard dash 1000 times. To make the die rolling example analogous to an I.Q. test, you have to roll a die once to measure its average value.
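
To put rough numbers on that single-roll framing (a toy sketch, nothing more): treat one roll as a one-sample estimate of the 3.5 average and look at how far each face sits from it.

```python
# Toy sketch: one roll as a one-sample estimate of the die's 3.5 average.
# Whether the gap counts as "error" or "variance" is the whole argument here.
faces = [1, 2, 3, 4, 5, 6]
mean = sum(faces) / len(faces)            # 3.5
gaps = [abs(f - mean) for f in faces]     # how far each face is from 3.5
print(sum(gaps) / len(gaps))              # typical gap: 1.5
print(max(gaps))                          # extreme faces (1 or 6): 2.5
```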

 

But getting back to the point you were making with your own die rolling example--I'll agree that rolling a pair of dice is a good process for selecting a discrete value from a statistical distribution. So what?


Once again, he can't distinguish between a single event and a distribution of events. But then again, he appears to have trouble with the word "different".

 

Let's try something, HA. This might help you define "different." We'll use an example from your school days. A Big Mac and Chicken McNuggets are different. As in NOT THE SAME. Chicken McNuggets and the Chicken Deluxe sandwich could be the same, because they are both chicken. But the chicken sandwich and the Quarter Pounder are different. Are we getting anywhere?

 

Ooooh, good example. Is a McChicken Sandwich a distribution, whereas a McNugget is discrete? Or is a McChicken Sandwich just a McNugget with lots of error?


You give a person one I.Q. test to estimate what their score would be if given 1000 tests. You give an NFL prospect just one 40 yard dash test to see how he'd do if asked to run the 40 yard dash 1000 times. To make the die rolling example analogous to an I.Q. test, you have to roll a die once to measure its average value.

 

Which just proves it's a really stupid analogy. It also proves that you don't know sh-- about measurement or error.

 

But getting back to the point you were making with your own die rolling example--I'll agree that rolling a pair of dice is a good process for selecting a discrete value from a statistical distribution. So what?

 

:rolleyes:<_<

 

Because it's a statistical distribution with NO ERROR. Thus, it demonstrates how error is NOT responsible for regression, but VARIANCE is.
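
If you want it spelled out, here's a minimal sketch (arbitrary seed and counts): the sum of a pair of dice has variance but zero measurement error, and extreme first sums still come back toward 7 on a re-roll.

```python
# Minimal sketch: a pair of dice has variance but no measurement error,
# yet extreme first sums still regress toward the mean of 7.
# Seed and trial count are arbitrary.
import random

random.seed(3)
resums = []
for _ in range(100_000):
    s1 = random.randint(1, 6) + random.randint(1, 6)
    s2 = random.randint(1, 6) + random.randint(1, 6)
    if s1 >= 11:                        # extreme first sum (11 or 12)
        resums.append(s2)

# The re-rolls average ~7, far below the ~11.3 of the extreme group.
print(sum(resums) / len(resums))
```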


Ooooh, good example. Is a McChicken Sandwich a distribution, whereas a McNugget is discrete? Or is a McChicken Sandwich just a McNugget with lots of error?

 

I think you are getting a bit confused. There should be 3.5 nuggets every single time. The 6-piece nugget is in error, and the 9-piece nugget is more in error. We're not even going to get into the statistics behind the 20-piece nugget. Actually, the most correct order of McNuggets at McDonald's is the Nugget Happy Meal, which has 4 nuggets.

 

The McChicken sandwich is actually the precursor to a McNugget. In McDonald's case, the McNugget has roughly 30-40% heritability of the chicken trait with respect to its parent, the chicken sandwich. And that explains the size difference between the two.


I think you are getting a bit confused. There should be 3.5 nuggets every single time. The 6-piece nugget is in error, and the 9-piece nugget is more in error. We're not even going to get into the statistics behind the 20-piece nugget. Actually, the most correct order of McNuggets at McDonald's is the Nugget Happy Meal, which has 4 nuggets.

 

No, no, no, no, no. When you order Chicken McNuggets, you're actually trying to measure the true average number of nuggets in an order, which is (4+6+10+20)/4 = 10 nuggets, not 3.5. So any time you order 4, 6, or 20 McNuggets, you're actually wrong, and regression toward the mean will next time force you to order 10.

 

Now interestingly enough, you'll notice that the true average number of nuggets actually exists in the statistical nugget distribution. This makes me think that, while HA may have attended Hamburger U, he may not have graduated. :cry:

 

The McChicken sandwich is actually the precursor to a McNugget. In McDonald's case, the McNugget has roughly 30-40% heritability of the chicken trait with respect to its parent, the chicken sandwich. And that explains the size difference between the two.

 

That would also explain why the McChicken sandwich tastes 2-3 times more like chicken than a McNugget.

 

Is it wrong that the above made sense to me? :rolleyes: I mean, it's still wrong...but I understood it. <_<


Now interestingly enough, you'll notice that the true average number of nuggets actually exists in the statistical nugget distribution. This makes me think that, while HA may have attended Hamburger U, he may not have graduated. :rolleyes:

 

Now let's be fair. He could have been in a specialized curriculum that didn't deal with chicken.


Big whoop if we make fun of it. You know what? If it makes you feel better, you can pick on my alma mater, the University of Illinois.

I don't have any deep needs that would be fulfilled by making fun of your school. So I'll pass on your offer. And I'm not that worried about people making fun of my school. Except that, given my lack of respect for some of the people on here, I do worry the mockery could carry over into the hiring process. And I don't want to do that to my fellow graduates.


Which just proves it's a really stupid analogy. It also proves that you don't know sh-- about measurement or error.

:rolleyes:<_<

 

Because it's a statistical distribution with NO ERROR. Thus, it demonstrates how error is NOT responsible for regression, but VARIANCE is.

Your die rolling example proves that when initial results are determined entirely by random chance, an extreme value on the initial test is likely to be followed by a value closer to the mean. The Hyperstats article made the same point.

 

I.Q. test scores are determined mostly by something innate (there's a strong correlation between test and retest) but also somewhat by random chance (scores vary somewhat from test to retest). To the extent that I.Q. scores are due to random chance, extreme scores on an initial test are likely to regress toward the population's mean on a retest. And yes, the element of random chance for an I.Q. test does represent measurement error.
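
Here's a quick sketch of that "to the extent" claim, with made-up numbers: the bigger the luck share of the score, the farther the top scorers fall back on the retest.

```python
# Sketch: the amount of regression tracks the luck share of the score.
# With zero luck there is no regression at all. All numbers are made up.
import random

random.seed(4)
for luck_sd in (0, 5, 15):              # no luck, some luck, lots of luck
    ability = [random.gauss(100, 15) for _ in range(100_000)]
    test1 = [a + random.gauss(0, luck_sd) for a in ability]
    test2 = [a + random.gauss(0, luck_sd) for a in ability]
    high = [(x, y) for x, y in zip(test1, test2) if x >= 125]
    m1 = sum(x for x, _ in high) / len(high)
    m2 = sum(y for _, y in high) / len(high)
    print(luck_sd, round(m1, 1), round(m2, 1))  # the gap grows with luck_sd
```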


No, no, no, no, no. When you order Chicken McNuggets, you're actually trying to measure the true average number of nuggets in an order, which is (4+6+10+20)/4 = 10 nuggets, not 3.5. So any time you order 4, 6, or 20 McNuggets, you're actually wrong, and regression toward the mean will next time force you to order 10.

 

Now interestingly enough, you'll notice that the true average number of nuggets actually exists in the statistical nugget distribution. This makes me think that, while HA may have attended Hamburger U, he may not have graduated. :cry:

That would also explain why the McChicken sandwich tastes 2-3 times more like chicken than a McNugget.

 

Is it wrong that the above made sense to me? :rolleyes: I mean, it's still wrong...but I understood it. <_<

I don't have to read this post to know what it feels like to try to teach statistics to an unruly group of ten-year-olds.


I don't have any deep needs that would be fulfilled by making fun of your school. So I'll pass on your offer. And I'm not that worried about people making fun of my school. Except that, given my lack of respect for some of the people on here, I do worry the mockery could carry over into the hiring process. And I don't want to do that to my fellow graduates.

 

Carry over to the hiring process?!? OMG that is effin lame :rolleyes:

