Essentially, the paragraph states that for suicide, those who did not receive gender-affirming care saw a suicide rate three times higher than controls - and this is with overcontrolling for psychological treatment visits. Those who did receive care had no significant difference in suicide rates from controls. Epidemiologist Dr. Meyerowitz-Katz said of these findings, "The authors in their discussion focus on the fact that this difference was not statistically significant (presumably the p-value was 0.051-0.054), but that's not a useful distinction. There's a lot of uncertainty here, but the increased risk is still remarkable!"
Notably, this is the only section where the researchers withhold the model that doesn't include visits to psychological specialists. The correlation between receiving gender-affirming care and a decreased suicide risk would likely be even more pronounced in a model free from the overcontrolling issue. Had the researchers presented such a finding, it would have fundamentally undermined the premise of their paper by showing that gender-affirming care indeed saves lives. Even in attempts to dilute the relationship with confounding variables, the signal around gender-affirming care remains strong!
The main problem with the study is explained in that quote. The overcontrolling problem refers to the fact that suicide correlates with psychiatric visits because suicidal people are more likely to seek help; controlling for those visits would be like saying that visits to cancer doctors cause death from cancer.
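The overcontrolling point can be illustrated with a toy simulation (entirely made-up data and variable names, nothing from the study): care reduces a latent "distress" variable, distress drives both psychiatric visits and the bad outcome, and conditioning on visits shrinks the visible benefit of care.

```python
import random

random.seed(0)
n = 100_000
rows = []
for _ in range(n):
    care = random.random() < 0.5                     # hypothetical treatment flag
    # Latent distress: one unit lower with care (the true benefit of care)
    distress = random.gauss(0.0, 1.0) + (0.0 if care else 1.0)
    visits = distress + random.gauss(0.0, 0.5)       # visits track distress
    outcome = distress + random.gauss(0.0, 0.5)      # bad outcome tracks distress
    rows.append((care, visits, outcome))

def mean(xs):
    return sum(xs) / len(xs)

# Raw comparison recovers the true ~1.0 benefit of care
gap_raw = (mean([o for c, v, o in rows if not c])
           - mean([o for c, v, o in rows if c]))

# "Controlling" for visits: compare only people with similar visit counts.
# Since visits is a proxy for the very distress that drives the outcome,
# this conditioning absorbs most of care's effect.
band = [(c, o) for c, v, o in rows if 0.8 < v < 1.2]
gap_ctrl = (mean([o for c, o in band if not c])
            - mean([o for c, o in band if c]))

print(f"raw gap: {gap_raw:.2f}, gap after conditioning on visits: {gap_ctrl:.2f}")
```

The controlled gap comes out several times smaller than the raw one, even though care's true effect never changed - which is exactly the complaint about including psychiatric visits in the model.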
Additionally, the median age for referral to the gender clinic was 19, so the results can't really be applied to minors.
Statistical significance is important, though.

Sure, but alpha is an arbitrary choice. A p-value of .051 isn't magically different from .049; they're essentially equally significant. The reported p-value was 0.05. Smaller p-values are better, but it's a continuum, not a series of buckets. Also, if you are testing whether care is better than no care, a one-tailed test should be used, in which case the p-value would be 0.025.
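To make the continuum point concrete, here's a quick sketch (a plain standard-normal z-test, not the study's actual model) showing how close two-sided p-values near 0.05 really are, and that the one-sided p is exactly half the two-sided one:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_values(z):
    """Two-sided and one-sided p-values for a z statistic."""
    two_sided = 2.0 * (1.0 - normal_cdf(abs(z)))
    one_sided = 1.0 - normal_cdf(z)  # tests "effect > 0" only
    return two_sided, one_sided

# z = 1.96 is the textbook two-sided p = 0.05 threshold
for z in (1.90, 1.96, 2.00):
    two, one = p_values(z)
    print(f"z={z:.2f}  two-sided p={two:.4f}  one-sided p={one:.4f}")
```

Nothing special happens between z = 1.90 and z = 2.00; the p-value just slides smoothly through 0.05, and the one-tailed version sits at half of it the whole way.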
But the bigger problem is that they indirectly controlled for the outcome, so any difference is being minimized. And given that they conveniently leave out the valid model only for this section, it almost seems like the "mistake" was intentional. At least one author has a history of public transphobia and clearly had their conclusion before any analysis started. There are lies, damned lies, and then there's statistics: if you have enough variables, you can find specific comparisons where you simply don't have enough data to get the p-value small enough. Since everyone in the study was at least referred to a gender clinic, they were already in a good enough place to take that step without their parents being able to interfere, so it's a group with an already low suicide rate compared to the overall transgender population - and they only have 7 data points divided between both GR groups. The difference would have to be massive for any effect to be statistically significant, and they still had to manipulate the data to push the p-value up to 0.05.
Or I guess one could argue that death by suicide is caused by a high number of psychiatric treatment contacts, as the study's authors seem to be implying. Not saying you're arguing that; just pointing out how ridiculous a claim the study makes. Technically it only reports a correlation, but if you want to assume the authors aren't just transphobic or totally incompetent, that's the interpretation that would make the most sense.
Perhaps researcher bias is an issue; I don't know. Statistics are frequently abused and manipulated, but also frequently disregarded when the data "feels" significant - even in academic papers! It's been a long time since I formally studied statistics, but more recently I've been shocked by how casually they're glossed over in higher education. "Consult a statistician"... has anybody ever done this?
It's fraught, really: evidence-based medicine is our best tool, but when the subject is so emotionally (and increasingly politically) charged, is there anybody researching this who doesn't have a bias? I genuinely doubt it. In fact... my hypothesis is that there are no unbiased researchers on this. Which would possibly be the null hypothesis.
Consult a statistician. … has anybody ever done this?
Definitely not in academia. Agreed that academic papers are regularly published by people who know nothing about statistics but threw some numbers from 3 trials into some stats package, with no understanding of what they're doing beyond "p-value below certain thresholds, so I put *, **, or *** in a column." And I doubt journals make sure someone with a statistics background double-checks things in the basic sciences.
I'd just expect better in the medical field, where statistics are much more essential and the results are applied in ways that directly have a large impact on people's quality of life.
Agreed there are no unbiased researchers. But you can be biased and still not make obvious exclusions to fit your story. And if they want their story to be "psychiatric care causes suicide," they shouldn't bury that central claim deep in the paper; it should be explicit in the title or at least the abstract.