
Rubes

Community Member
  • Posts

    9,833
  • Joined

  • Last visited

Everything posted by Rubes

  1. Yeah, it’s been discussed throughout the thread…not precisely sure how much is needed, though…
  2. Have to admit, this is what I’m hoping for, even if that gives us too many DEs.
  3. Yeah but unless you’re moving him before this season, it makes no sense to use a first round pick on a guy who would not likely see the field much. Gotta use that pick on someone who will make a difference this season.
  4. We traded up for that guy, didn't we. The year after we drafted EJ. 🤮
  5. Maybe we'll finally get to see Sammy hit his true potential with a real QB.
  6. I agree. I think Beane (and most other GMs) are waiting to see what happens in the draft and who ends up falling to them. After that they'll have an idea just how much they want or need to pursue a vet CB as well. Big difference if a good CB falls to them at #25 vs. none seem to make sense until the third round.
  7. Yeah, thank goodness Beane didn't listen to the guys tweeting the replies to this back in 2018...
  8. So by this logic, Burrow now has 3 playoff wins and has gotten to the Super Bowl, which Josh hasn't. Doesn't that mean Burrow should be listed above Allen?
  9. Well *****, thanks for sharing it and making the rest of us look at it too!
  10. Poorly worded on my part. Non-controlled studies may produce results of undetermined meaning—you may show a statistical difference before and after the intervention, but without a control group you can't say whether that result would have happened anyway, in the absence of the intervention. Controlled studies can produce results with more confidence and meaning in their statistical significance—but the statistically significant value may not have much meaning in the real world, as when a study of max running speed has a large enough sample size to detect a tiny difference of 0.01 MPH. You could conclude that ACL injuries had a significant impact on speed, but most people would look at 0.01 MPH and say, "Who cares?" The former is less trustworthy because you can't draw a valid conclusion with much confidence. The latter is more trustworthy because it was designed well, even though the more valid conclusion may not ultimately mean much.
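To make that 0.01 MPH point concrete, here's a toy Python sketch (every number here is invented) showing how a huge sample size turns a trivially small difference in mean speed into an enormous test statistic:

```python
import math

def two_sample_t(mean1, mean2, sd, n):
    """Equal-variance two-sample t statistic for two groups of size n each."""
    se = math.sqrt(2 * sd**2 / n)
    return (mean1 - mean2) / se

# Hypothetical numbers: controls average 20.00 MPH, the injured group
# 19.99 MPH, sd 0.5 MPH, and an absurd one million players per group.
t = two_sample_t(20.00, 19.99, sd=0.5, n=1_000_000)
print(round(t, 1))  # → 14.1, wildly "significant" for a 0.01 MPH gap
```

The t statistic is huge (so the p-value is essentially zero), yet the effect size is 0.01 MPH—statistically significant, practically meaningless.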
  11. Sham surgeries are most definitely performed—on animals, for animal studies—which of course are relevant to our understanding of similar scientific questions in humans. The sham surgeries are done for the reasons I stated. Of course, not all questions lend themselves to a randomized trial. You can't do a prospective randomized trial of ACL injuries, for instance. Studies like that are best done as controlled observational studies the way I described. You can certainly do an uncontrolled before-after study, and lots of people publish those, but by no means are those studies considered high-quality evidence. The main criticism of an uncontrolled before-after study is that the results are untrustworthy—you have no idea whether the observed effects are truly significant or not. In many cases it's very difficult to identify a control group for a before-after study, and that's okay—you can't always have what you want. But by accepting that and publishing an uncontrolled before-after study, you're basically admitting that your results, while interesting, may or may not have real-world significance. You choose a study design based on the question you're trying to answer. If the question is "What is the impact of an ACL injury on an NFL player's career?" then you know you'll be doing an observational study, but the real question you're trying to answer is "How does what happened to those injured players compare to what would have happened if they had never been injured?" Since you can't do that directly, you do the next best thing—compare what happened to those injured players to what happened to a similar group of non-injured players. Imagine that the main outcome you were trying to test is a player's maximum running speed. So the main question is: what is the impact of an ACL injury and repair on a player's maximum running speed?
Let's say you have all of the data on players' maximum speeds from the NFL Combine, and now you identify players who had ACL injuries during their NFL careers, so you test them again for their max running speed. You could just do a simple before-after study and compare their running speeds now vs. their running speeds then, and you'd probably see a decent difference. You could, for instance, say that those with an ACL injury saw an average loss of 1 MPH in their max running speed. Is that a valid conclusion? Not really. The reason, of course, is that everyone slows down as they age, so what you'd really want to do is compare the average speed loss in those who were injured with the average speed loss in those who were never injured. Then you'd know the impact of the ACL injury on max running speed. You need the control group to know how significant the observed loss of speed is. And yes, the null hypothesis would be that the ACL injury had no effect on max running speed. And that may very well be the case. Or there may be an effect, but it's not statistically significant. Or the loss may be statistically significant, but not significant in the real world. For instance, you could show a statistically significant loss of 0.1 MPH in their speed, but how that impacts play in real games may not be a big deal. But you still need the control group to understand whether the observed differences pre- and post-injury are statistically above and beyond what you would have seen, on average, in the absence of the injury.
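That running-speed example can be sketched as a quick difference-in-differences calculation in Python (all speeds below are made up for illustration):

```python
# Hypothetical (before, after) max speeds in MPH. Everyone slows down
# with age, so the naive before-after change overstates the injury's effect.
injured = [(22.0, 20.8), (21.5, 20.4), (22.4, 21.3)]   # had ACL injury
controls = [(22.1, 21.4), (21.8, 21.1), (22.3, 21.6)]  # never injured

def mean_change(pairs):
    """Average (after - before) change across a group."""
    return sum(after - before for before, after in pairs) / len(pairs)

naive = mean_change(injured)                          # before-after only
did = mean_change(injured) - mean_change(controls)    # difference-in-differences
print(round(naive, 2), round(did, 2))  # → -1.13 -0.43
```

The naive before-after comparison attributes the full ~1.1 MPH loss to the injury; subtracting the control group's age-related decline leaves a much smaller loss actually attributable to the ACL injury.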
  12. Perhaps, but it more likely has to do with baseline performance level. Tre at baseline is a Pro Bowl-caliber player, one of the top at his position. If his performance declines a certain amount, say 10-15%, as a result of his injury and surgery, he'd still be better than most CBs out there and would have a job. By some measures of performance, however, he would be worse off than pre-injury, so it's not entirely about being in the league or not.
  13. Not exactly. Using a person as their own control is an appropriate design if you think all of the external factors that could impact the outcome are the same before the exposure (injury) vs. after. If you're just measuring, for instance, speed or leg strength or something like that, then that's a reasonable thing to do. I think the point that many people are making here is that this is not the case—when players are injured and are lost for a year (more or less), there are other factors that can impact the outcomes of interest here: the number of starts a player has, the number of snaps they play, etc. Being injured and missing a lot or all of a season can result in other players taking over starting roles, teams deciding to move on to cheaper players, and so on. It may depend on age, on whether they were an entrenched starter or a backup, on whether new draft picks have come along, and so on. The purpose of including a control or comparator group is to make sure that the observations seen—a change in starts, a change in snaps, or another change in performance—are due specifically to the exposure (injury). If you do a study as you describe, measuring performance at the same task after an intervention, you can't really say for sure that the intervention is the cause of any changes seen (eg, differences could be due to various things that change over time). That's why you include a control group made up of similar people with the same features measured at the same time, with presumably the only difference being the absence of the intervention. Then you do things like measure the average change in the intervention group and compare that to the average change in the control group. The difference, presumably, is due to the main difference between the groups—the intervention. Same thing with the benzo study.
In order to know that the changes in memory observed were due to the benzos (and not to a placebo effect), they would need a separate but otherwise similar control group that is given a placebo injection. Then you compare the average changes seen in the benzo group vs. the control group, with the difference (if any) thus attributable to the benzos. It's true for surgical trials, too—in some studies looking at the effect of a surgical intervention, the intervention group is compared to a control group given a "sham" surgical procedure, since just the act of undergoing surgery could result in changes in the observed outcomes. But all of that addresses prospective randomized trials, the gold standard for evidence. What these guys did here is a retrospective observational study. To design an observational study to be as similar to a prospective randomized trial as possible, you do the work to choose a historical control group that is as similar to the intervention group as possible and that otherwise has (presumably) the same distribution of "unmeasured" variables. It's the analog of randomization in a controlled trial, the purpose of which is to try to ensure that the two comparison groups are identical other than the intervention.
  14. Not necessarily. The idea is to match cases (injured) and controls (non-injured) on the features or variables you think could have an impact on the outcome (years in the league, starts, snaps, etc). If you do that, then there's no reason to believe the controls are more likely to be "guys whose careers were limited by not being good enough"—that variable should be equally distributed amongst cases and controls. If it's not, then the comparison is not valid, but as researchers you need to do what you can to make sure the cases and controls are as similar as possible. Yes, nothing happened to the control group (ie, no injury)—that's the point. What is the impact of the injury on a player's performance? We can't know what their performance would have been without the injury, so we try to identify players as similar to the injured player as possible, but without the injury. If the study showed that both groups were out of the league after the same period of time, and you did a good job matching the cases with similar controls, then yes, you could conclude that the injury had little effect on player longevity. This I agree with. The idea of the article is to quantify how much of an impact the injury has.
  15. Yeah, that last article is a decent example of what I mean—it's a matched case-control study where they looked at players who participated in the NFL Combine and who had a prior ACLR to examine the outcomes of draft position, games started, games played, and snap percentage. Each participant with an ACLR was matched with control players—players who participated in the NFL Combine who had no knee injuries or surgeries prior to the Combine, matched by position. In this case they matched only by position, but it at least allowed them to "eliminate variability among the different positions in terms of the unique stresses on the ACLR for each position." They were also essentially matched by age by virtue of the timing of the start point (participation in the NFL Combine). They could have matched on other characteristics, but this was probably the most important matching variable. The study of ACL injuries during an NFL career gets a little more complicated given the timing of the injury and the more detailed pre-injury history needed to match against.
  16. Ultimately, the best comparison would be between what happened to the player with the injury, and what would have happened to the player had he never had the injury (ie, the counterfactual). But since that's impossible, you try to do the next best thing: compare what happened to the player with the injury vs. what happened to other, similar players beginning at the same time point in their careers. If you can make the comparison group as similar to the injured players as possible, then the results are more valid. So what you should look for are other players in the league who are similar in age and size, play the same position, and have a similar history (college career, time in the NFL, snaps at the NFL level, etc) but who did not have an ACL injury up to the same time point in their careers as the matched injured player. So you would do this matching for every injured NFL player using one or more of these controls, and then compare their subsequent performance and longevity (accounting for the time out due to the injury). Comparing the injured player before vs. after their injury is not an ideal comparison because so many other non-injury-related factors can impact the outcomes being measured—eg, how much longevity and playing time they experience.
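The matching step described above might look something like this greedy 1:1 sketch (player names and numbers are hypothetical, and real studies use more careful matching methods, e.g. propensity scores):

```python
# Hypothetical rosters: pair each injured player with the most similar
# non-injured player, requiring the same position and minimizing the
# combined distance in age and years of NFL experience.
injured = [
    {"name": "A", "pos": "CB", "age": 26, "yrs": 4},
    {"name": "B", "pos": "WR", "age": 24, "yrs": 2},
]
pool = [
    {"name": "C", "pos": "CB", "age": 29, "yrs": 7},
    {"name": "D", "pos": "CB", "age": 26, "yrs": 3},
    {"name": "E", "pos": "WR", "age": 24, "yrs": 2},
]

def match_controls(cases, pool):
    """Greedy 1:1 matching; each control is used at most once."""
    available = list(pool)
    matches = {}
    for case in cases:
        candidates = [p for p in available if p["pos"] == case["pos"]]
        if not candidates:
            continue  # no eligible control left at this position
        best = min(
            candidates,
            key=lambda p: abs(p["age"] - case["age"]) + abs(p["yrs"] - case["yrs"]),
        )
        matches[case["name"]] = best["name"]
        available.remove(best)  # don't reuse a matched control
    return matches

print(match_controls(injured, pool))  # → {'A': 'D', 'B': 'E'}
```

With the pairs in hand, you'd then compare subsequent starts, snaps, and longevity between each injured player and his matched control, which approximates the counterfactual comparison the post describes.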
  17. Ferguson with the 5 INTs and still pulled it out in the end. What a game that was.
  18. Man I am so pumped for next season. If BBB uses the space to pick up a decent FA CB, I'm going to be ready to put my head through a wall. LFG!
  19. Any product that starts with B so they can use the BBB acronym with its double-entendre.
  20. Yeah, I mentioned that a few posts above. The problem is that they chose the wrong comparator group—they compared the injured players with themselves pre-injury, and also injured players who returned to play vs. those who did not return to play. Both are incorrect comparisons. They should be comparing injured players to age-, NFL experience-, and position-matched players who did not suffer an ACL injury at that point in their careers.
  21. Don't be so sure...most people in medicine are not trained to do research, and a good deal of the research that is done is not done well. It's the job of journal reviewers to spot problems with study design or analysis, and that process is fraught with issues—not the least of which is that reviewers aren't paid to do it, so you never know how carefully they do their job. Journals also gain from articles that draw attention, even if they are not done well, so there is a motivation in some cases to publish rather than reject. It's hard to tease apart. Still, these authors are from Drexel and Duke, two pretty strong research universities, so it's a little surprising. The problem here is that they didn't use the right comparator group. They were comparing individuals pre- and post-injury, and comparing those who were injured and returned to play vs. those who were injured and never returned to play. The right comparator, as I mentioned earlier, would be those who were injured vs. those who weren't, matched on their age, years of NFL experience, position, and maybe other factors like the round they were drafted in, whether they were a starter or a backup at the time of injury, and so on.
  22. Yep, I’m thinking they should have done a matched cohort study where they matched each player who suffered an ACL injury with players of the same age, years in the NFL, and position at the time of injury. Perhaps also include whether they were a starter or backup. Then you’d at least get a better sense of the impact of the injury on career metrics like starts, total plays, and so on as compared with similar players who didn’t suffer that kind of injury.