Completely agreed, which is why it’s promising that they’re looking for patterns rather than specific areas of activation, and pairing those findings with treatments, using statistics to see whether certain treatment modalities work better for certain broad patterns.
Is it though? Isn’t that more vulnerable to p-hacking and its kindred? I lack the expertise to make much of the paper; I’m just pretty disappointed with neuropsych as a field :P Data on depression treatment success are already noisy as fuck and in replication hell, and classifying noisy-as-fuck fMRI data into broad patterns seems hard to do in a repeatable fashion.
I guess we’ll find out in time if this replicates, if anyone even tries to do that.
Great thought process! Yes, fMRI imaging is very vulnerable to p-hacking, which is more or less what the dead fish paper points out (even when properly calibrated, the problem is how noisy the raw data are in the first place). By classifying broad patterns, however, you eliminate some of the noise the dead fish paper shows to be problematic: you abstract away from whether individual micro-structures show statistically significant activation and move to the macro level. While the dead fish paper may have shown activity in specific areas, if you then looked at activity across larger portions of the brain, or the whole brain, you would detect no statistical difference from rest (or from a dead fish, in this case).
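To make that concrete, here’s a toy simulation of the multiple-comparisons issue the dead fish paper illustrates (the voxel counts and scan numbers are invented for illustration, not taken from the paper): testing thousands of voxels independently at an uncorrected p < 0.05 produces hundreds of "active" voxels even in pure noise, while a single test on the aggregated signal has no such built-in false-positive rate.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulate "task" vs "rest" data for 10,000 voxels that is, by
# construction, pure noise: no real activation anywhere.
n_voxels, n_scans = 10_000, 20
task = rng.normal(0, 1, (n_voxels, n_scans))
rest = rng.normal(0, 1, (n_voxels, n_scans))

# Voxel-wise two-sample t-tests, uncorrected at p < 0.05:
# roughly 5% of voxels (~500) come out "significant" by chance alone.
t, p = ttest_ind(task, rest, axis=1)
false_positives = int((p < 0.05).sum())
print(f"'Active' voxels in pure noise: {false_positives}")

# Aggregating over the whole volume first means a single test,
# so there is no voxel-level multiple-comparisons inflation.
t_whole, p_whole = ttest_ind(task.mean(axis=0), rest.mean(axis=0))
print(f"Whole-volume p-value: {p_whole:.3f}")
```

This is the same logic behind the dead salmon result: run enough uncorrected voxel tests and some will light up, whether the subject is alive or not.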
Furthermore, this study doesn’t stop there: it asks whether these groupings tell us anything useful about treatment. Each group is split into subgroups based on treatment modality, and these different treatments (therapy, drugs, etc.) are compared from group to group to see if the broad fMRI groupings make any clinical sense. If the fMRI grouping were completely bogus and p-hacked, the treatment groups would show no difference between each other. This two-step process ensures that bogus groups, and groups with no difference in clinical treatment outcomes, are weeded out along the way through statistical rigor.
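The second step can be sketched as a simple contingency-table test (the counts below are entirely made up for illustration, not from the study): if the fMRI biotypes carry real clinical information, response rates to a treatment should differ between biotypes; if the grouping is noise, the rates should look the same and the test comes back null.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical step-two check: do fMRI-derived "biotypes" predict
# treatment response? Rows: biotype A / biotype B;
# columns: responders / non-responders to one treatment modality.
informative = np.array([[40, 10],   # biotype A mostly responds
                        [15, 35]])  # biotype B mostly doesn't

bogus = np.array([[25, 25],         # a p-hacked grouping: response
                  [26, 24]])        # rates are the same in both groups

for name, table in [("informative", informative), ("bogus", bogus)]:
    chi2, pval, _, _ = chi2_contingency(table)
    print(f"{name} grouping: p = {pval:.4f}")
```

An informative grouping yields a tiny p-value; a bogus one yields a large p-value and gets filtered out, which is exactly the safeguard described above.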
fair fair. I assume the group is probably planning to run a more interventionist study to see if the results hold when you run time forward.
It’ll be good news if it works (maybe; I do worry we’re heading towards a Brave New World-style future where disquiet with the status quo is pathologised and medicated away, stunting criticism), but I won’t go to bat for it yet.