Err America files Chapter 11


KD in CA


Please don't tell me you talk about TBD at work.  :flirt:  :wub:

827568[/snapback]

 

No. Just this statistics topic. Ever since I burst out laughing at that "regression toward the mean is error" comment, they've wanted regular updates...

 

Yeah, it's kind of sad. But they're statisticians...they don't have much... :(



No.  Just this statistics topic.  Ever since I burst out laughing at that "regression toward the mean is error" comment, they've wanted regular updates...

 

Yeah, it's kind of sad.  But they're statisticians...they don't have much...  :wub:

827570[/snapback]

 

:flirt:


You measured "regression toward the mean" as a function of error when the same people take the same test multiple times.

 

Now read very carefully, the next part is very important:

 

YOU MEASURED THE WRONG MEASURABLE.

 

And even beyond that, your math was all sorts of !@#$ed up...but let's deal with the bigger issue of understanding the actual problem first: even if we presume the stated Wikipedia equation is correct (it's not, as I explained earlier), and even if we presume your definition of "heritability" as used in that equation is correct (it's not, for reasons you've already established you can't begin to understand), and even if we assume IQ is an adequate measurable (it's not, which is why not even the studies you've quoted use it), none of it has anything to do with the same people taking IQ tests more than once.  You're measuring the variance between multiple instances of the same thing - a single person's IQ test.  You're SUPPOSED to be measuring the variance between individual instances of different things - parents' and children's IQs.

 

Basically, you simulated the wrong thing.  You spectacularly !@#$ed up the problem.  I can't wait to share this one with the statisticians at work...  :flirt:

827559[/snapback]

I'll overlook the tone of your post, and focus on the more substantive portion. First off, I agree with your opening sentence. I indeed measured regression toward the mean as a function of error when the same people take the test multiple times. My Monte Carlo simulation shows that people with high measured I.Q.s are, on average, a little less intelligent than their scores make them appear. Do you agree so far?

 

Now let's turn to the topic of children. Suppose two people with high measured I.Q.s decide to have kids. My simulation demonstrated that, on average, these measured I.Q.s will slightly overstate the true intelligence level of the parents. The children's I.Q.s are a function of the parents' true I.Q.s, not their measured I.Q.s. When you go to measure the children's I.Q.s (the functional equivalent of the second I.Q. test in my simulation) you'll find their measured I.Q.s are closer to the mean than the measured I.Q.s of their parents. This is because the Threshold parents (first test) group had measured I.Q. scores that mildly overstated their true intelligence, while the Threshold children (second test) had measured I.Q. scores that, on average, stated their intelligence more or less correctly.
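Here's a simplified sketch of what the simulation does. The specific numbers - population mean 100, true-ability SD 15, error SD 5, threshold 130 - are just illustrative choices, not the values from the original run:

```python
import random

random.seed(0)

N = 100_000
TRUE_SD = 15     # spread of true intelligence (illustrative)
ERROR_SD = 5     # test measurement error (illustrative)
THRESHOLD = 130  # cutoff for "high measured I.Q." (illustrative)

# Each person has a fixed true I.Q.; each test adds fresh, independent error.
true_iq = [random.gauss(100, TRUE_SD) for _ in range(N)]
test1 = [t + random.gauss(0, ERROR_SD) for t in true_iq]
test2 = [t + random.gauss(0, ERROR_SD) for t in true_iq]

# Select the high scorers using the FIRST test only.
high = [i for i in range(N) if test1[i] >= THRESHOLD]

mean1 = sum(test1[i] for i in high) / len(high)
mean_true = sum(true_iq[i] for i in high) / len(high)
mean2 = sum(test2[i] for i in high) / len(high)

# The group's first-test scores overstate its true ability, and its
# second-test scores fall back toward its true-ability mean.
print(mean1, mean_true, mean2)
```

Selecting on the first test guarantees that the selected group's error terms average out positive; the second test's errors average zero, so the retest mean drops even though nobody's true I.Q. changed.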

 

Assuming you're foolish enough to accurately communicate the details of this discussion to your colleagues, I think you'll find the more insightful will agree with me.


I'll overlook the tone of your post, and focus on the more substantive portion. First off, I agree with your opening sentence. I indeed measured regression toward the mean as a function of error when the same people take the test multiple times. My Monte Carlo simulation shows that people with high measured I.Q.s are, on average, a little less intelligent than their scores make them appear. Do you agree so far?

 

Now let's turn to the topic of children. Suppose two people with high measured I.Q.s decide to have kids. My simulation demonstrated that, on average, these measured I.Q.s will slightly overstate the true intelligence level of the parents. The children's I.Q.s are a function of the parents' true I.Q.s, not their measured I.Q.s. When you go to measure the children's I.Q.s (the functional equivalent of the second I.Q. test in my simulation) you'll find their measured I.Q.s are closer to the mean than the measured I.Q.s of their parents. This is because the Threshold parents (first test) group had measured I.Q. scores that mildly overstated their true intelligence, while the Threshold children (second test) had measured I.Q. scores that, on average, stated their intelligence more or less correctly.

 

Assuming you're foolish enough to accurately communicate the details of this discussion to your colleagues, I think you'll find the more insightful will agree with me.

827587[/snapback]

Actually, giving a particular individual a 2nd (or additional) IQ test isn't the functional equivalent of giving their child an IQ test. If that was anything more than a typo on your part, you are far more confused about this issue than BJ/Raimus give you credit for.


Actually, giving a particular individual a 2nd (or additional) IQ test isn't the functional equivalent of giving their child an IQ test.  If that was anything more than a typo on your part, you are far more confused about this issue than BJ/Raimus give you credit for.

827591[/snapback]

When I used the phrase "functional equivalence" I had the following thought process in my mind. Suppose that a child's I.Q. is determined solely by that of the parents, without respect to the underlying population group. If this were the case, then giving someone's children an I.Q. test would be just as valid a measure of parental intelligence as giving the parents themselves an I.Q. test. (Yes, there are a lot of factors that I'm ignoring here, and which my opponents will predictably but incorrectly accuse me of being ignorant of. My purpose in ignoring these other factors is to focus exclusively on whether measurement error can cause the appearance of regression toward the mean.)

 

I know reality is far more complex than the world I've described above. But in that world, children appear to regress toward the mean, and they appear to do so strictly because of measurement error. Because measurement error is also a part of real world I.Q. tests, we shouldn't ignore its potential to explain why the children of people with exceptionally high measured I.Q.s tend to have slightly lower measured I.Q.s than their parents.
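In that simplified world, the parent-child version looks like this. Again, the specific numbers are illustrative, and per the assumption above the child's true I.Q. is set exactly equal to the parents':

```python
import random

random.seed(1)

N = 100_000
TRUE_SD = 15     # illustrative
ERROR_SD = 5     # illustrative
THRESHOLD = 130  # illustrative

parent_true = [random.gauss(100, TRUE_SD) for _ in range(N)]
parent_score = [t + random.gauss(0, ERROR_SD) for t in parent_true]

# Simplifying assumption from above: the child's true I.Q. equals the
# parents' true I.Q. exactly; only the test itself adds noise.
child_score = [t + random.gauss(0, ERROR_SD) for t in parent_true]

selected = [i for i in range(N) if parent_score[i] >= THRESHOLD]
parent_mean = sum(parent_score[i] for i in selected) / len(selected)
child_mean = sum(child_score[i] for i in selected) / len(selected)

# Children of high-scoring parents appear to regress toward the mean,
# even though their true I.Q.s are identical to their parents'.
print(parent_mean, child_mean)
```

The children's scores sit closer to the population mean than their parents' scores purely because of measurement error, since nothing else in this toy world differs between the generations.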


No.  Just this statistics topic.  Ever since I burst out laughing at that "regression toward the mean is error" comment, they've wanted regular updates...

 

Yeah, it's kind of sad.  But they're statisticians...they don't have much...  :(

827570[/snapback]

:flirt::wub:


Actually, giving a particular individual a 2nd (or additional) IQ test isn't the functional equivalent of giving their child an IQ test.  If that was anything more than a typo on your part, you are far more confused about this issue than BJ/Raimus give you credit for.

827591[/snapback]

 

Well...it is, if you're measuring regression toward the mean of error. Because that's precisely what he did: he set up a simulation, established his measurable, ran it, and concluded that his measurable regressed toward the mean.

 

His problem is that his normally distributed measurable was error...so he didn't "prove" that error caused regression toward the mean, he proved that normally distributed error will regress toward the mean. :wub: Not that he'll twig to the difference...cause, effect, who cares?

 

And that's beyond the question of whether or not he set the simulation up properly...he didn't, of course. :flirt:


I'll overlook the tone of your post, and focus on the more substantive portion. First off, I agree with your opening sentence. I indeed measured regression toward the mean as a function of error when the same people take the test multiple times. My Monte Carlo simulation shows that people with high measured I.Q.s are, on average, a little less intelligent than their scores make them appear. Do you agree so far?

 

No, not at all. You can't assume that the mean of normally distributed error, applied over multiple measurements, decreases. It doesn't, by definition...it's normally distributed. You're saying your simulation invalidated the very initial parameters you established for it. All that proves is that you didn't know what you were doing when you wrote it.

 

Now let's turn to the topic of children. Suppose two people with high measured I.Q.s decide to have kids. My simulation demonstrated that, on average, these measured I.Q.s will slightly overstate the true intelligence level of the parents.

 

Again, no...because you established normally distributed error as a fixed initial parameter. Measured IQs should over- and understate "real" IQs at the same rate. If they didn't...again, you !@#$ed up your simulation.

 

The children's I.Q.s are a function of the parents' true I.Q.s, not their measured I.Q.s. When you go to measure the children's I.Q.s (the functional equivalent of the second I.Q. test in my simulation) you'll find their measured I.Q.s are closer to the mean than the measured I.Q.s of their parents. This is because the Threshold parents (first test) group had measured I.Q. scores that mildly overstated their true intelligence, while the Threshold children (second test) had measured I.Q. scores that, on average, stated their intelligence more or less correctly.

 

:flirt: What? Again, you're not even wrong. Ignoring the obvious bull sh-- in that paragraph...how the hell does the error in the test magically disappear in the second iteration? I can tell you why: because you don't know what you're doing. You're not measuring what you think you're measuring.

 

Assuming you're foolish enough to accurately communicate the details of this discussion to your colleagues, I think you'll find the more insightful will agree with me.

827587[/snapback]

 

Actually, they all think you've got oatmeal for brains, regardless of insightfulness. :wub:


Well...it is, if you're measuring regression toward the mean of error.  Because that's precisely what he did: he set up a simulation, established his measurable, ran it, and concluded that his measurable regressed toward the mean.

 

His problem is that his normally distributed measurable was error...so he didn't "prove" that error caused regression toward the mean, he proved that normally distributed error will regress toward the mean.  :wub:  Not that he'll twig to the difference...cause, effect, who cares? 

 

And that's beyond the question of whether or not he set the simulation up properly...he didn't, of course.  :flirt:

827620[/snapback]

Is this your honest attempt to analyze my simulation? :(

 

Yes, my normally distributed variable was error, because I was measuring whether error in measurement can cause the appearance of regression toward the mean! With an error term, people who do exceptionally well on an intelligence test will tend to do worse the second time they take the test. Without an error term, that regression toward the mean disappears. In my simulation, the presence of an error term causes the appearance of regression toward the mean. It causes the regression toward the mean. It's not an effect of anything else, because the simulation was so simple there was nothing else that could possibly be causing the appearance of regression toward the mean. Nothing.
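That claim is directly checkable: run the same selection with and without the error term. (The error SD of 5 and threshold of 130 here are illustrative values, not the originals.)

```python
import random

def apparent_regression(error_sd, n=100_000, threshold=130, seed=2):
    """Mean drop from test 1 to test 2 among people selected on test 1."""
    rng = random.Random(seed)
    true_iq = [rng.gauss(100, 15) for _ in range(n)]
    test1 = [t + rng.gauss(0, error_sd) for t in true_iq]
    test2 = [t + rng.gauss(0, error_sd) for t in true_iq]
    sel = [i for i in range(n) if test1[i] >= threshold]
    m1 = sum(test1[i] for i in sel) / len(sel)
    m2 = sum(test2[i] for i in sel) / len(sel)
    return m1 - m2

# With an error term, the selected group's retest mean falls back;
# with the error term removed, the apparent "regression" vanishes entirely.
print(apparent_regression(error_sd=5.0))  # positive drop
print(apparent_regression(error_sd=0.0))  # exactly zero
```

Flipping one parameter turns the effect on and off, which is the whole point: in this toy model the error term is the only possible cause.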

 

You didn't embarrass yourself as much as Ramius did when he came to syhuang's defense. But you certainly did embarrass yourself.


Yes, my normally distributed variable was error, because I was measuring whether error in measurement can cause the appearance of regression toward the mean!

827634[/snapback]

 

No, you weren't. You were measuring how a normally distributed measurable regresses toward the mean. Not how a normally distributed parameter causes another normally distributed measurable to regress. All you did was confuse "error" with "intelligence" in your own simulation.

 

That you can't even see that is so completely unsurprising, it shouldn't be nearly as funny as it is. But it is...because you don't even understand your own simulation. :flirt:


When I used the phrase "functional equivalence" I had the following thought process in my mind. Suppose that a child's I.Q. is determined solely by that of the parents, without respect to the underlying population group. If this were the case, then giving someone's children an I.Q. test would be just as valid a measure of parental intelligence as giving the parents themselves an I.Q. test. (Yes, there are a lot of factors that I'm ignoring here, and which my opponents will predictably but incorrectly accuse me of being ignorant of. My purpose in ignoring these other factors is to focus exclusively on whether measurement error can cause the appearance of regression toward the mean.)

 

I know reality is far more complex than the world I've described above. But in that world, children appear to regress toward the mean, and they appear to do so strictly because of measurement error. Because measurement error is also a part of real world I.Q. tests, we shouldn't ignore its potential to explain why the children of people with exceptionally high measured I.Q.s tend to have slightly lower measured I.Q.s than their parents.

827600[/snapback]

Actually, as you admit, your example is extremely simplified. But as BJ points out, you have essentially set up your example so that it produces exactly the result you wanted.

 

The model you put forth will not necessarily tell you anything about how children's IQs are affected by their parents' IQs, nor will it tell you how much of the deviation in IQ from one to the other is due to measurement error rather than other sources and factors.

