I need a break from updating my dissertation and battling with the formatting gods, so I want to talk about allowable and preferable errors in the context of policy design.
Everything that is measured by humans will have some degree of error to it. When we design policies, we want to be deliberate about which type of error concerns us most. When we test for cancer, we really want to be confident that we're not missing people who have cancer by telling them that they're okay when they are not. We're willing, at the screening stage at least, to use tests that are really good at identifying people who have cancer signs but aren't particularly good at only picking those folks out of a crowd. Cancer screening routinely flags a substantial number of individuals who don't have cancer for further investigation. We've decided that we would much rather see false positives than false negatives. We're okay with that error. We would love a cheap test that is 100% sensitive and 100% specific, but those things rarely exist.
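To see why a highly sensitive screening test still flags so many healthy people, it helps to run the arithmetic. The sketch below uses made-up numbers (a hypothetical test with 95% sensitivity and 85% specificity, and a 1% prevalence in the screened population); none of these figures describe any real test.

```python
# Illustrative numbers only: a hypothetical screening test applied to
# 100,000 people, with an assumed true cancer prevalence of 1%.
population = 100_000
prevalence = 0.01       # hypothetical: 1% of those screened have cancer
sensitivity = 0.95      # hypothetical: test catches 95% of true cases
specificity = 0.85      # hypothetical: 85% of healthy people test negative

with_cancer = population * prevalence
without_cancer = population - with_cancer

true_positives = with_cancer * sensitivity            # cancers caught
false_negatives = with_cancer - true_positives        # cancers missed (the error we fear most)
false_positives = without_cancer * (1 - specificity)  # healthy people flagged for follow-up

# Positive predictive value: share of positive results that are real cancers
ppv = true_positives / (true_positives + false_positives)

print(f"missed cancers:       {false_negatives:.0f}")
print(f"healthy but flagged:  {false_positives:.0f}")
print(f"positive predictive value: {ppv:.1%}")
```

With these assumed numbers, only about 6% of the people the test flags actually have cancer, but just 50 cases are missed. That is the trade the screening-stage policy deliberately accepts.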
The ACA relies on estimates of future income to determine subsidy levels and eligibility. If there is a gross error, the IRS will claw back excess subsidies in the next tax year. Under the Biden Administration, there has been a decision to welcome errors that lead to more federal spending and more people insured at higher levels of coverage. Under the Trump Administration, there was a decision to welcome errors that drove down federal spending and led to fewer people being enrolled.
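The mechanics of that reconciliation can be sketched in miniature. The subsidy schedule below is entirely made up for illustration (the real ACA calculation runs through federal poverty level brackets, benchmark plan premiums, and income-based repayment caps), but the shape of the process is the same: subsidies are advanced based on estimated income, recomputed against actual income at tax time, and any excess is recovered.

```python
# Toy sketch of ACA-style subsidy reconciliation. The subsidy schedule
# here is invented for illustration; real ACA rules (FPL brackets,
# benchmark premiums, repayment caps) are far more complex.
def monthly_subsidy(annual_income: float) -> float:
    """Hypothetical schedule: $500/month, phasing out linearly
    between $30,000 and $60,000 of annual income."""
    if annual_income >= 60_000:
        return 0.0
    return min(500.0, 500.0 * (60_000 - annual_income) / 30_000)

estimated_income = 35_000  # what the enrollee projected when signing up
actual_income = 45_000     # what the tax return later shows

advance = 12 * monthly_subsidy(estimated_income)  # paid out during the year
entitled = 12 * monthly_subsidy(actual_income)    # what they actually qualified for
clawback = max(0.0, advance - entitled)           # excess recovered at tax time

print(f"advance paid: ${advance:,.0f}")
print(f"actually entitled: ${entitled:,.0f}")
print(f"clawed back: ${clawback:,.0f}")
```

Underestimating income produces an overpaid subsidy that gets clawed back; overestimating produces the opposite error, an underpaid subsidy refunded at filing. Which direction of error a given administration makes easy or hard is exactly the policy lever described above.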
Every policy that requires measurement of some sort will have some error. A key policy decision is deciding which type of error is more tolerable and which is less.



