I had a lot of fun talking about risk adjustment yesterday. Michael Kalina, an insurance plumber who worked at a co-op, raised a very good point:
I have long believed that legacy companies are able to use institutional knowledge to "enhance" their results, which creates asymmetries.
— Michael Kalina (@MPKalina) June 1, 2017
I agree completely with him. Deep data sets across multiple lines of business are extremely valuable if they are properly used in risk adjustment projections and risk adjustment revenue optimization.
When I left UPMC Health Plan, I was running the data mining for the Medicaid risk adjustment optimization team. We were active in most lines of business: Medicaid, Medicare Advantage, Medicare SNP, CHIP, Exchange, and employer sponsored insurance. Our optimization program required a diagnosis on a valid claim within a reference period. The data mining looked for highly probable persistent diagnoses that had shown up previously but had not yet shown up in the reference period.
The best example is a lower leg amputation. Once a leg is gone, it is gone. My data mining looked at Medicaid claims, it looked at Medicare claims, it looked at Exchange claims, it looked at group claims. And if there was strong evidence from a 2005 group claim for a brand-new-to-Medicaid member, a gap would be identified for the 2017 risk adjustment cycle. Similar logic played out for the Exchange and the Medicare Advantage risk adjustment teams.
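The mining logic above can be sketched roughly as follows. This is a minimal illustration, not UPMC's actual system: the claim fields, function name, and the single status code are my own hypothetical stand-ins.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical simplified claim record; real claims data is far richer.
@dataclass
class Claim:
    member_id: str
    diagnosis_code: str
    service_date: date

# Diagnosis codes assumed to persist once seen. Z89.511 is the ICD-10-CM
# status code for acquired absence of right leg below knee.
PERSISTENT_CODES = {"Z89.511"}

def find_gaps(claims, ref_start, ref_end):
    """Flag persistent diagnoses seen on any prior claim (any line of
    business) but not yet on a claim inside the reference period."""
    by_member = {}
    for c in claims:
        by_member.setdefault(c.member_id, []).append(c)

    gaps = []
    for member, history in by_member.items():
        prior = {c.diagnosis_code for c in history
                 if c.service_date < ref_start
                 and c.diagnosis_code in PERSISTENT_CODES}
        current = {c.diagnosis_code for c in history
                   if ref_start <= c.service_date <= ref_end}
        # Persistent codes seen before but missing now are the gaps.
        for code in sorted(prior - current):
            gaps.append((member, code))
    return gaps
```

So an amputation status code on a 2005 group claim, with no matching code in the 2017 reference period, would surface as a 2017 gap, which is exactly why depth of historical data across lines of business matters here.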
New insurers would miss that risk adjustment gain. Insurers that only offered coverage in a single line of business, like the individual market, would miss that potential gain. Insurers with multiple lines of business but small market share in a region would miss a significant proportion of the gains from data mining.
Rich data sets are an extremely valuable asset and competitive advantage for locally significant legacy carriers.
Now what are the policy implications of this fact of life?
Good, widespread information is critical. States that have all-payer claims databases should be sending identifiable diagnosis and procedure code data to all risk adjusted product lines for all carriers. This would allow new carriers and tightly focused carriers to get a significant data flow that they would otherwise not have. It would increase the probability that a diagnosis with a high probability of being an actual gap gets identified as one in the risk adjustment revenue chase.
Another approach would be for a state or a collection of states to keep a list of truly persistent conditions for individuals. Any condition on this list would be automatically attributed to the risk adjustment score of the insurer. A diagnosis or a prescription would not be required. For instance, a state could decide that an amputation is a truly persistent condition. If a person was identified as having a leg amputation in a previous year, there would be no need for an insurer to see a Z-status code submitted in the current year to get credit. This list of truly persistent conditions would be small (amputations, organ transplant status codes, cystic fibrosis, HIV status codes, etc.) and it would be an additive list. This again minimizes the inherent big data advantage of long established multi-line carriers. They can still mine their data for the non-obvious but the obvious is common property.
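The attribution rule in that proposal is simple enough to sketch. Again a hedged illustration: the registry structure and function name are my own, and no state runs anything like this today.

```python
# Hypothetical state-maintained registry: member -> truly persistent
# conditions recorded in any prior year (amputations, transplant status,
# cystic fibrosis, HIV status, etc.).
STATE_REGISTRY = {
    "M1": {"Z89.511"},  # leg amputation recorded in a prior year
}

def attributed_conditions(member_id, current_year_codes):
    """Additive attribution: the insurer gets credit for every condition
    on the state's persistent list without needing a current-year claim,
    on top of whatever it coded itself this year."""
    return set(current_year_codes) | STATE_REGISTRY.get(member_id, set())
```

Because the list is additive, an insurer can only gain from it; the data-mining advantage of a legacy carrier is trimmed down to the genuinely non-obvious conditions.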
low-tech cyclist
David, could you explain how you’re using the word ‘gap’ here? Thanks!
David Anderson
@low-tech cyclist: yes. A gap is a diagnosis that we think is valid but not showing up on a claim in the study period
Humboldtblue
You’re a weird dude, dude.
David Anderson
@Humboldtblue: yes, yes I am
Michael Kalina
So… where were the gaps sent? A medical management team?
low-tech cyclist
@David Anderson: Thanks! Your post makes a lot more sense now.
RobertB
My experience with this (Worker’s Compensation claims) is way old, but I remember the states we were dealing with were jealous of their data. It could have been our naivete, but we were trying to sell the idea of sharing the data for just such a thing, and the states were aghast at the notion of pooling their data. This was back before Big Data was a thing. We were all jackleg DBAs (although good C++ developers), and their reluctance could very well have been at the idea of letting us monkey around with their data.
WereBear
Read this and thought of you, David Anderson:
On Buzzfeed: Over a Million Christians have opted out of healthcare
Eric
Would larger companies who have the data consider this a competitive advantage and proprietary? If so, wouldn’t those who have the data, who are also likely the largest and have the most political influence, be against sharing or requiring them to share?