Wraith

Everything posted by Wraith

  1. Isn't that what makes debate fun? You always know the other guy is coming back for more...
  2. You’re right, my initial response was only addressing whether an IQ test will show variation. Taken in the larger context of this debate, I can definitely see how it could confuse the issue. I did not do a very good job of distinguishing between measurement variation and population variation.

What I should have said is that in every test there are two broadly defined sources of variation: A) the thing BEING tested, and B) the thing DOING the testing. A very basic definition of a “capable” measurement system is that the variation from the measurement system (source B) is sufficiently small that variation from the process (source A) can be seen. In other words, the noise does not overwhelm the signal.

There is an extreme opposite case where an extremely incapable measurement system can demonstrate zero variation but be totally inaccurate. This is often the case when the measurement system or test lacks sufficient resolution (called gage resolution). In that case, the test will probably show no variation but may show bias (the error PDF has standard deviation = 0 and mean != 0). This is what I was getting at with my yardstick and micrometer example.

In a lot of cases, the two sources of variation are very distinct. But in my IQ test example above, the difference between process variation and measurement variation is not well defined. Is the amount of adrenaline in your bloodstream a controllable factor (in which case, any variation it causes in the test score would be measurement variation) or not (in which case, any variation it causes would be process variation)? This becomes even more ambiguous when you consider that adrenaline levels can be PARTIALLY regulated (by removing any outside stimulus from the testing environment), but you cannot keep the test subject from daydreaming. Of course, I know next to nothing about IQ tests, so this particular example may be totally useless. I have no idea if adrenaline affects how well you perform on an IQ test.

But the points remain: everything shows variation; just because a test exhibits no variation does not make it an adequate test; and there are two broadly defined categories of variation in a test, and occasionally it gets difficult to figure out which category various factors belong to. It’s an interesting subject, one I wrestle with occasionally in my day-to-day work.
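The signal-vs-noise idea above is easy to sketch in a quick simulation. Observed spread is the quadrature sum of process spread and gage spread, so a noisy gage inflates what you see. All the numbers below (process and gage standard deviations) are made up purely for illustration:

```python
import random

def observed_std(process_std, gage_std, n=100_000, seed=42):
    """Simulate measuring n parts: each observation is the part's true
    value plus independent, zero-centered measurement error."""
    rng = random.Random(seed)
    obs = [rng.gauss(0.0, process_std) + rng.gauss(0.0, gage_std)
           for _ in range(n)]
    mean = sum(obs) / n
    return (sum((x - mean) ** 2 for x in obs) / (n - 1)) ** 0.5

# Capable gage: noise is small, so the observed spread is essentially
# the process spread (sqrt(1.0^2 + 0.1^2) ~ 1.005).
capable = observed_std(process_std=1.0, gage_std=0.1)

# Incapable gage: noise swamps the signal (sqrt(1.0^2 + 5.0^2) ~ 5.10),
# so almost all the "variation" you see is the measurement system,
# not the process.
incapable = observed_std(process_std=1.0, gage_std=5.0)
```

The point of the sketch: with the incapable gage, the observed spread is dominated by source B, and nothing useful can be said about source A.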
  3. Where the heck were you hanging out? I lived in Buffalo for four years and never heard racial slurs with any kind of regularity. There was the occasional idiot, of course, but no more than anywhere else I've ever lived. As for anti-Semitism, that is an interesting perspective given that Buffalo has a pretty significant Jewish population, at least in some areas.
  4. Let's stop calling it "luck" and start calling it "variation." If you've got a test or measurement system that shows no variation, you have either an incredibly good test or an incredibly bad one. Either the test procedure has reduced the variation to such a minimal level that the measurement system is incapable of discerning it, or the measurement system is so bad that it is incapable of discerning the variation even when it is not at a minimal level. These are really the same thing, because we get to define how minimal is minimal enough.

I could measure the length of something that is nominally supposed to be 36 inches long with a set of micrometers and with a yardstick. If the lengths are varying on the scale of .001 of an inch, the micrometer will probably show variation and the yardstick certainly won't. Which measurement system is more capable? Disregard the fact that you'd need a very strange set of mics to measure something 36" long with....

Everything in the real world shows variation. That includes human intelligence. There are plenty of outside factors that affect your ability to think, such as the amount of rest you've had, caffeine in the bloodstream, distracting factors, etc. How you think, solve problems, and answer questions varies to some extent over time. If a test of human intelligence shows no variation, it simply means it is not capable of detecting the variation. Sometimes that's acceptable; sometimes it's not.
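The micrometer-vs-yardstick point can be simulated by rounding each true length to an instrument's resolution. The graduation sizes below are my assumption (1/16 in for the yardstick, 0.0001 in for the micrometer):

```python
import random

def readings(resolution, n=1000, seed=7):
    """Measure parts nominally 36.000 in long whose true lengths vary
    with a standard deviation of 0.001 in; the instrument rounds each
    true length to its resolution."""
    rng = random.Random(seed)
    return [round(rng.gauss(36.0, 0.001) / resolution) * resolution
            for _ in range(n)]

yardstick = readings(resolution=1 / 16)    # 1/16-in graduations (assumed)
micrometer = readings(resolution=0.0001)   # 0.0001-in resolution (assumed)

# The yardstick reports 36.0 every single time: zero variation, but only
# because it cannot resolve the real variation. The micrometer shows the
# part-to-part spread the yardstick hides.
distinct_yardstick = len(set(yardstick))    # 1 distinct reading
distinct_micrometer = len(set(micrometer))  # dozens of distinct readings
```

Zero variation from the yardstick here is not evidence of a stable process; it is evidence of an instrument that can't see the process.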
  5. Yup, that's pretty much it. Holcomb's Arm was misled a bit by that article. The article describes a scenario where exceptional scores move towards the population mean when retested, and calls it regression towards the mean. HA simulates that scenario, sees that the scenario only works because of measurement error, and concludes that measurement error causes regression towards the population mean. I can see how it could happen.
  6. I am an engineer first and foremost, so I like real world examples. So I am going to give you an extremely simple example of what statisticians traditionally refer to as "regression towards the mean" that I've worked with often:

Weight of a plastic part is a very good metric for a whole bunch of process parameters in injection molding. If the weights of a series of plastic parts are consistent and stable over time, you know that the injection unit of the molding machine is behaving in a controlled and stable manner. You also know that the part dimensions are likely to be behaving in a controlled, stable manner.

When dealing with very small parts that require very small and precise shot sizes from a molding machine, part weight may have to be measured with a large degree of precision. When you start measuring part weight to ten-thousandths of a gram, outside factors such as the ambient temperature and humidity of the room air have huge effects, even in a controlled environment. These uncontrollable factors result in measurement error. It is typically normally distributed and centered at zero.

So parts are weighed repeatedly over time. If a part is only measured once, you have no idea if the weight you've gotten is the true weight, near the true value, or an extreme value that had a 0.1% chance of happening. Repeated measurements dampen out these effects. This is because measurement error is normally distributed and will regress towards the mean (of the ERROR, not the population; this says nothing about the weights of any other part). Repeated measurements remove the effects of measurement error and cause regression towards the "true" mean. There is a large body of knowledge regarding how to determine how big a sample size (in this case, how many repeated measurements) is necessary to get an accurate reflection of the true mean (and the answer is always "how accurate do you need to be?").

EDIT: Note, repeated measurements in this example are not done because you think the parts are changing over time. Just want to make that clear.
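A sketch of why the repeated weighings help. The part weight and error magnitude below are invented for illustration; the mechanism (averaging shrinks the error spread by roughly the square root of the number of weighings) is the point:

```python
import random

def avg_of_n(true_weight, gage_std, n, rng):
    """Average of n repeated weighings, each corrupted by normally
    distributed, zero-centered measurement error."""
    return sum(rng.gauss(true_weight, gage_std) for _ in range(n)) / n

rng = random.Random(0)
TRUE_W = 0.2500   # grams -- hypothetical micro-molded part
ERR_SD = 0.0005   # grams -- ambient temperature/humidity effects

# Compare single weighings against 100-shot averages, many times over.
singles = [avg_of_n(TRUE_W, ERR_SD, 1, rng) for _ in range(500)]
averages = [avg_of_n(TRUE_W, ERR_SD, 100, rng) for _ in range(500)]

def rms_error(xs):
    """Root-mean-square deviation from the true weight."""
    return (sum((x - TRUE_W) ** 2 for x in xs) / len(xs)) ** 0.5

# Averaging 100 weighings shrinks the error spread by about sqrt(100) = 10:
# the mean of the ERROR regresses toward zero, so the averaged estimate
# regresses toward the part's true weight.
```

This is the "regression toward the mean of the error" in the post above: nothing about the part changes between weighings; only the error averages out.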
  7. I would absolutely say he has mislabeled regression toward the mean, or at least oversimplified it. Without knowing a person's true intelligence and the error distribution of the test, it is impossible to say what a subsequent test score of A SINGLE, SPECIFIC PERSON will be. If the example person's true IQ is 790 and he scores a 750, the probability that a subsequent score is going to be even lower than 750 is incredibly small (assuming the IQ test has only a reasonable amount of error), and a likely score of 725 is a ludicrously low suggestion. In other words, not very likely. The author should be referring to a sample of people who scored 750 (as your scenario does). This example is confounding the two behaviors that Bungee Jumper has brought up: the probability distribution of the underlying population, and regression of the error towards the mean ERROR (typically zero) with subsequent retests.
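To put a rough number on "incredibly small": if the test error is normally distributed, the chance that a true-790 scorer comes in at or below 750 on a retest is just the normal CDF evaluated at 750. The 15-point error standard deviation below is purely my assumption, not anything from the actual test:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Hypothetical: true score 790, normally distributed test error, SD 15.
# The 750 he scored was already a ~2.7-sigma unlucky draw; drawing that
# badly (or worse) again is a sub-1% event.
p_750_or_lower = norm_cdf(750, mu=790, sigma=15)
```

A "likely" retest of 725 would be an even more extreme draw, which is why the article's single-person framing doesn't hold up.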
  8. Understood. I am just worried about my credibility with people just jumping into this argument for the first time. Not that I have much credibility.
  9. Well, I would urge him to call the behavior in his scenario something besides regression towards the mean, definitely. But is it "a bulletin board crime" to continue to CALL it "regression towards the mean" (when in some literal sense, it is regression towards a mean...)? No, it's just being stubborn. I would caution HA if he is trying to say his scenario is simulating the traditional definition of "regression towards the mean." But I don't think he is saying that, nor do I think he believes that now (nor am I sure he ever believed that).
  10. That's not necessary. The first mistake HA made was calling the behavior seen in his scenario "regression towards the mean." That phrase has a very specific meaning to statisticians, and the behavior exhibited by his scenario does not fit that meaning. You'll notice that is why I always refer to it as "the specific scenario" and not "regression towards the mean." However, in HA's defense, he has never claimed to be a trained or professional statistician, so it is perfectly reasonable to assume he would not know that the phrase "regression towards the mean" has such a specific meaning/application in traditional statistics. Furthermore, because his scenario demonstrates how the presence of normally distributed measurement error contributes to the movement of sample means towards population means after retest, I don't think it is much of a crime for an amateur statistician with a basic stats education to call it "regression towards the mean." The other mistakes HA has made involve being stubborn and arrogant, but that is definitely not in short supply amongst his detractors, either. Hell, I am as stubborn and arrogant as they come.
  11. Hey now, don't just throw my name around like that. I defended some very specific areas of HA's argument in a previous thread. They were worthy of being defended. That does not mean I support everything he says on the matter of statistics. I have also told him at various times that some of his statements were wrong. You yourself referenced those occasions earlier in this thread. That does not mean I disagree with everything he says. So don't try to imply I am backing up every one of his claims, and don't try to imply that this statement means I think he's always wrong or stupid.
  12. I had to go back and refresh my memory regarding what exactly the behavior is that HA was referring to (regardless of whether he called it regression toward the mean or whatever). I haven't been following the on-going debate at all, so imagine my surprise at seeing my name thrown around in multiple threads the last few days. Anyway, the original premise in HA's scenario was this (in my own words, as I understand it):
- DISCLAIMER: I pay absolutely no attention to IQ scores/tests, so if I use unreasonable IQ test results, error, etc., don't fault me for it; they are hypothetical.
- Take a sufficiently large, RANDOM sample of the population. (I presume human intelligence is normally distributed.)
- Have them all take a test that measures human intelligence.
- The assumed result would be a normal distribution with some mean and standard deviation.
- Assume that the test has normally distributed error centered at zero and with some (non-zero) standard deviation.
- Take a slice at one segment of the sample that is not located at the mean (offset from the mean). THIS IS IMPORTANT.
- Have that segment retake the test.
- The mean of that segment's retest scores will tend to be closer to the mean of the population than the original mean of that segment.
This actually does happen. Without the normally distributed measurement error, the mean of the segment's retest scores would not change after retest. I have defended HA against people who have said that:
- This specific behavior does not happen
- This specific behavior does not need measurement error (with a probability distribution) for it to occur.
I agree with you (Bungee Jumper) when I say:
- The underlying cause of the behavior is a probability distribution.
- The premise of this scenario is not really proving much.
- This is not "regression towards the mean" as statisticians would define it.
- The methodology is suspect. (In HA's defense, I have forgotten much of the explanation and have not been following the on-going debate, so I am not up to date on his methodology.)
Nowhere do I say that error is causing the distribution of human intelligence to occur instead of a point value at the mean. It is an underlying assumption in HA's scenario that human intelligence has a PDF, and in fact, for my explanation to be valid, a distribution of human intelligence has to be present.
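The scenario described above is easy to simulate end to end. With the (made-up) population and error parameters below, the off-mean segment's retest mean lands between its first-test mean and the population mean, and it would not move at all if the error SD were zero:

```python
import random

rng = random.Random(1)
N = 100_000
POP_MEAN, POP_SD, ERR_SD = 100.0, 15.0, 5.0   # all values hypothetical

# Everyone has a "true" IQ; each test score is true IQ plus zero-centered
# normally distributed measurement error.
true_iq = [rng.gauss(POP_MEAN, POP_SD) for _ in range(N)]
test1 = [t + rng.gauss(0.0, ERR_SD) for t in true_iq]

# Slice a segment offset from the mean: everyone who scored 130 or above.
segment = [i for i, s in enumerate(test1) if s >= 130.0]
test2 = [true_iq[i] + rng.gauss(0.0, ERR_SD) for i in segment]

mean1 = sum(test1[i] for i in segment) / len(segment)
mean2 = sum(test2) / len(test2)
# mean2 falls between mean1 and POP_MEAN: the segment's retest mean moves
# toward the population mean. With ERR_SD = 0 every retest would equal the
# first test and the segment mean would not move.
```

Selecting on a high first score preferentially picks people whose error draw was positive; on retest those errors average back toward zero, which is exactly the semantic tangle about "cause" discussed above.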
  13. I'm not referring to the general case of regression towards the mean when I say that non-zero error has to be present in addition to a probability distribution. I am referring to the specific scenario laid out by Holcomb's Arm regarding IQ scores appearing to regress towards the POPULATION mean (as opposed to the individual SAMPLE mean) in the presence of non-zero error.

QUOTE(Wraith @ Nov 10 2006, 03:28 PM)
I may have missed it when he explicitly stated that measurement error is causing the regression towards the mean, but that doesn't seem to be what the argument is about here at all anymore. I do not think anyone would argue that the phenomenon HA is describing does not happen. It seems that you are arguing about what the cause of that phenomenon is.

Fair enough: without measurement error, this phenomenon could not occur. That is because without measurement error, there would be no deviation from the true results. So if HA is saying that measurement error is needed for this phenomenon to occur, he would in fact be correct. However, while measurement error is necessary (because it causes the necessary deviation), the regression towards the mean is really happening because the sample population (the range of "true" values) and the error are normally distributed, which is what Bungee Jumper is arguing. This is also true. I have not seen HA say that the normal distribution is NOT causing the regression. The example he just laid out in a response to me shows he understands how the normal distribution is causing the phenomenon. So are we really just arguing over semantics?

EDIT: I liken it to someone saying that stretching a rubber band is causing it to snap back to its original form. Yes, the displacement needs to occur for the snap back to occur, but the snap back is actually occurring because of the elasticity of the rubber band. Both are necessary. It seems to me to be, at least right now, an argument of semantics.
  14. He did NOT say that. I pointed out in that thread that the SPECIFIC behavior seen in a scenario laid out by HA in the previous thread DOES happen, and ONLY happens when non-zero error is present. I then said that the debate over whether error was leading to that SPECIFIC behavior is very much a semantic debate. The behavior is CAUSED by the laws of chance, but needs error to be present for it to occur. I likened it to stretching a rubber band and the snap back to its original form. The snap back is CAUSED by the elasticity of the rubber, NOT the stretch, but the stretch needs to be present for the snap to happen.
  15. True, it was supposedly named after McKeller, but I really don't remember Mr. McKeller being a "dominant TE." In fact, I don't remember McKeller doing much of anything during the Super Bowl years....
  16. ...and J.P. Losman is very low on the list of reasons why. In fact, I would venture to say he was our best player on offense, and it really isn't even close. That isn't saying ALL that much in this particular case. But then again, he did step up the most in a game that we lost by 3 points against one of the best teams in the league, and put in those terms, it is a pretty good game. The passing game (directed, of course, by Losman) was practically our entire offense. What little running game there was, was supplied by Losman. In case you didn't notice, he was our best runner today (26 yards, tying McGahee for the most yards, on a heck of a lot fewer attempts). His three turnovers led to exactly three Charger points. He threw for 14 points. The defense gave up 200 yards rushing. The Bills played one hell of a good team. The officials also did their part, too. Losman did not play all that well, but he certainly did his part to get a W. The defense also didn't play all that well, but did their part to earn a win, also. Willis McGahee and Anthony Thomas did not do their part to earn a win. Some incredibly bad and poorly timed calls by the officials and a non-existent running game were the #1a and #1b reasons the Bills lost.
  17. There are a couple of points about this "judgement call" that were infuriatingly idiotic: 1) What official in their right mind thinks a QB being grabbed by his jersey is "in the grasp"? These are people who are paid to watch football games every week. Over half their job is to watch quarterbacks. I mean, damn, he was grabbed for less than a second. 2) The whole point of "in the grasp" is to stop a play when the quarterback is being held and is defenseless. So how the hell can you retroactively call "in the grasp" after the play was over? If you truly thought he was in the grasp, blow the !@#$ing whistle and protect the quarterback. If Losman had truly been "in the grasp" on that play, he would've gotten killed, because the officials clearly did not blow that play dead. Totally, totally ludicrous.
  18. What the hell, did you type this post 5 weeks ago and just get around to hitting the submit button? I haven't seen many people, if any, being very critical of Losman, or the offensive line for that matter. For once, people are being reasonable.
  19. Wow, the defense is playing inspired all of a sudden. Very nice.
  20. That's three big 3rd Down receptions for Josh!
  21. Wow, Rivers and Losman look really similar today. Too bad McGahee doesn't look like Tomlinson at all.