In yesterday’s thread Dog Dawg Damn made an intriguing statement about causal inference and, implicitly, about ethics:
Retrospective, not Randomly controlled, and not strong evidence. I’ve seen better studies show no effect on same question
I would love to see them write an Institutional Review Board (IRB) application for a randomized controlled trial (RCT). Actually, I would love to see the revised IRB application after the IRB sends the initial one back for substantial revision, along with a note to review the Helsinki principles:
16. In medical practice and in medical research, most interventions involve risks and burdens.
Medical research involving human subjects may only be conducted if the importance of the objective outweighs the risks and burdens to the research subjects.
19. Some groups and individuals are particularly vulnerable and may have an increased likelihood of being wronged or of incurring additional harm.
All vulnerable groups and individuals should receive specifically considered protection.
33. The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best proven intervention(s), except in the following circumstances:
Where no proven intervention exists, the use of placebo, or no intervention, is acceptable; or
Where for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, the use of placebo, or no intervention is necessary to determine the efficacy or safety of an intervention
and the patients who receive any intervention less effective than the best proven one, placebo, or no intervention will not be subject to additional risks of serious or irreversible harm [DMA Emphasis added] as a result of not receiving the best proven intervention.
Extreme care must be taken to avoid abuse of this option.
Maybe they can write an IRB application that addresses these substantial considerations. I have a damn hard time imagining how to do that, but then again, my imagination is limited.
More importantly, not everything can be randomized, for ethical, legal, or pragmatic reasons. Entire fields rely on naturally occurring experiments, where an intervention happens in some places, for some people, or at some times but not others, creating substantial variation. Using appropriate methods, while acknowledging the assumptions needed for causal inference, lets us establish cause and effect and make sense of the world as it is.
In the past five years, I’ve used inverse probability weighting, matching, instrumental variable approaches, fixed and random effect models, difference in difference, synthetic controls, synthetic difference in difference, stacked difference in difference, regression discontinuity, difference in discontinuity, and matched border pair designs to make causal estimates of policies. The methodologists have even more variants and options that I am not even hinting at. Some of those papers have been published, others are under review, and more are being written. All of these methods carry substantial assumptions, but conditional on those assumptions being well supported, we can say that X caused Y.
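To make the basic logic concrete, here is a minimal sketch of the simplest of those designs, a two-group, two-period difference in difference, on entirely made-up numbers (none of this is from any real study). The key assumption is parallel trends: absent the policy, the treated group's outcome would have moved by the same amount as the comparison group's.

```python
# Minimal two-group, two-period difference-in-differences sketch.
# All numbers are illustrative only, not from any real study.
from statistics import mean

# Hypothetical outcome (say, an uninsured rate in percent) for
# treated and comparison units, before and after a policy change.
treated_pre = [18.0, 17.5, 19.0]
treated_post = [12.0, 11.5, 13.0]
control_pre = [17.0, 18.0, 16.5]
control_post = [16.0, 17.5, 15.5]

# The comparison group's change stands in for the counterfactual
# trend the treated group would have followed without the policy.
treated_change = mean(treated_post) - mean(treated_pre)
control_change = mean(control_post) - mean(control_pre)

did_estimate = treated_change - control_change
print(round(did_estimate, 2))  # -5.17
```

The treated group dropped about 6 points, but the comparison group also drifted down by nearly a point, so the design attributes only the roughly 5.2-point difference to the policy. Staggered and stacked variants apply the same subtraction logic across many treatment cohorts and adoption dates, which is where the more modern estimators earn their keep.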
We can say that things result from interventions even without a randomized experiment that kicks people off Medicaid for shits and giggles, or exposes them to $0 or $1 health insurance plans, or randomly assigns states or people within states to Healthcare.gov or a state based marketplace. The first is unethical as hell, as we know health insurance has substantial mortality effects. The second requires an act of Congress or at least a state legislature. And the last is HOW THE HELL DO WE ACTUALLY IMPLEMENT THAT ONCE WE GET AN ACT OF CONGRESS WITHOUT CONTROL/TREATMENT GROUP CONTAMINATION?
When feasible and ethical, randomization is absolutely wonderful. We should use randomized trials as much as we can, as randomization balances both observed and unobserved confounders in expectation. But for a wide variety of reasons, randomization is sometimes not feasible or ethical, so we should figure out how to see the world as it is with appropriate methods. And a staggered stacked difference in difference is a very good method for this context.