Dr. Amy Finkelstein and team have a special report in the New England Journal of Medicine. They conducted a rigorous randomized controlled trial of the Camden Coalition of Healthcare Providers. The Camden Coalition has gotten a ton of press for its “hot spotting” method of identifying folks who use a lot of medical services and then wrapping those folks in a bundle of care coordination and some social services. Prior assessments, using a simple pre/post analysis of the individuals included in the program, had found that this approach saved a lot of money and reduced hospitalizations. Finkelstein and her team conducted a random-assignment evaluation to establish what happens without the intervention, and therefore what the intervention itself is doing. Their initial finding, looking only at readmission to hospital, is that the hot spotting approach is not doing much:
The 180-day readmission rate was 62.3% in the intervention group and 61.7% in the control group. The adjusted between-group difference was not significant (0.82 percentage points; 95% confidence interval, −5.97 to 7.61). In contrast, a comparison of the intervention-group admissions during the 6 months before and after enrollment misleadingly suggested a 38-percentage-point decline in admissions related to the intervention because the comparison did not account for the similar decline in the control group.
So what is happening?
It all depends on the shape of the assumed counterfactual. Comparing people against themselves carries an implicit assumption that the crisis that prompted their eligibility is a permanent phase change, the new steady state. Any change from that steady state could then be attributed to the intervention.
Another shape of trajectory could reasonably be assumed. We could hypothesize that a spike is often just a spike and will naturally recede, a pattern known as regression to the mean. A spike triggers the intervention, but some, if not all, of the subsequent decline in utilization, readmissions, and costs would have occurred even if no intervention happened.
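The spike-then-recede dynamic is easy to see in a small simulation. This is a hypothetical sketch, not the Camden data: all the numbers (long-run admission rates, noise levels, the selection cutoff) are invented for illustration. Each simulated person has a stable long-run utilization rate plus period-to-period noise; selecting people in their spike period guarantees their next period looks better on average, with no intervention at all.

```python
import random

random.seed(0)

# Hypothetical population: each person has a stable long-run admissions
# rate per six-month period, observed with noise in any given window.
n = 100_000
people = []
for _ in range(n):
    base = random.uniform(0.5, 3.0)        # long-run admissions per period (invented)
    pre = base + random.gauss(0, 1.0)      # observed pre-enrollment period
    post = base + random.gauss(0, 1.0)     # observed post period, NO intervention
    people.append((pre, post))

# "Hot spotting" selection: enroll only people whose pre-period was extreme.
# The 3.5 cutoff is arbitrary, chosen for illustration.
spikers = [(pre, post) for pre, post in people if pre > 3.5]

mean_pre = sum(pre for pre, _ in spikers) / len(spikers)
mean_post = sum(post for _, post in spikers) / len(spikers)

print(f"selected pre-period mean:           {mean_pre:.2f}")
print(f"post-period mean (no intervention): {mean_post:.2f}")
# The post-period mean falls back toward the long-run rate even though
# nothing was done. A naive pre/post comparison would credit an
# intervention with this entire decline; only a randomized control
# group reveals that the decline happens anyway.
```

The decline shows up because selecting on a high observed value also selects on high noise, and the noise does not repeat. A control group experiences the same mechanical decline, which is exactly why the trial's pre/post comparison looked like a 38-point improvement while the between-group difference was null.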
Figuring out the right counterfactual, and therefore the shape of the response without an intervention, is critical to evaluation. The Finkelstein study shows that people with a huge spike naturally decline in utilization with or without the intervention. This change in the shape of the counterfactual is what reveals that this program of hot spotting, on at least this metric of readmission, is not doing so hot.
Good evidence is critical. Most experiments will return either null results or only slightly significant results. That is okay. My priors have changed significantly in the past sixteen hours. Hot spotting has a very plausible and facially coherent logic model of change, but the evidence this morning says the plausibility of real effects is far lower now than it was before the evaluation was published. This is a big deal.