No, Really, It's Possible That Health Insurance May Not Make Us Healthier

Countdown to Obamacare

More on that bombshell study out of Oregon

It's time for another post on last week's study out of Oregon, which showed--much to the surprise of practically everybody--that putting people into Medicaid didn't seem to significantly improve their physical health.

I know it's a lovely spring day and you're looking longingly at the happy hour specials, or that riding lawnmower you'd like to test drive one more time. But stay with me, because this debate is literally the most important thing to happen to health care policy since . . . well, for a long time, anyway. If the Oregon results hold, they will radically change the way that we think about health care policy: what it's for, what it can do, and how it should be constructed.

And contrary to what you might have heard, this is not some weird, outlying result that doesn't mean anything. There are other studies which have found surprisingly small effects of giving people insurance. I commend to you Levy and Meltzer's very thorough literature review from 2008, in which they point out just how mixed the data are:

How does health insurance affect health? After reviewing the evidence on this question, we reach three conclusions. First, many of the studies claiming to show a causal effect of health insurance on health do not do so convincingly because the observed correlation between insurance and good health may be driven by other, unobservable factors. Second, convincing evidence demonstrates that health insurance can improve health measures of some population subgroups, some of which, although not all, are the same subgroups that would be the likely targets of coverage expansion policies. Third, for policy purposes we need to know whether the results of these studies generalize. Solid answers to the multitude of important questions about how specific health insurance policy options may affect health seem likely to be forthcoming only with investment of substantial resources in social experiments.

Here is their table of the studies they looked at:

[Table: Levy and Meltzer's summary of the studies they reviewed.]

This was followed in 2009 by Kronick's study, the largest observational study of insurance and mortality ever done. His conclusion:

Adjusted for demographic, health status, and health behavior characteristics, the risk of subsequent mortality is no different for uninsured respondents than for those covered by employer-sponsored group insurance at baseline (hazard ratio 1.03, 95 percent confidence interval [CI], 0.95–1.12). Omitting health status as a control variable increases the estimated hazard ratio to 1.10 (95 percent CI, 1.03–1.19). Also omitting smoking status and body mass index increases the hazard ratio to 1.20 (95 percent CI, 1.15–1.24). The estimated association between lack of insurance and mortality is not larger among disadvantaged subgroups; when the analysis is restricted to amenable causes of death; when the follow-up period is shortened (to increase the likelihood of comparing the continuously insured and continuously uninsured); and does not change after people turn 65 and gain Medicare coverage.
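If hazard ratios aren't your native tongue, here's a quick way to read those numbers. The figures are lifted straight from the passage above; the decision rule (an estimate is statistically significant only when the 95 percent confidence interval excludes 1.0) is the standard one:

```python
# Kronick's hazard ratios: a ratio of 1.0 means the uninsured and the
# insured died at the same rate. The estimate is statistically
# significant only if the 95% confidence interval excludes 1.0.
results = {
    "full adjustment":           (1.03, 0.95, 1.12),
    "omitting health status":    (1.10, 1.03, 1.19),
    "also omitting smoking/BMI": (1.20, 1.15, 1.24),
}
for label, (hr, lo_ci, hi_ci) in results.items():
    verdict = "significant" if lo_ci > 1.0 or hi_ci < 1.0 else "not significant"
    print(f"{label}: HR {hr} [{lo_ci}, {hi_ci}] -> {verdict}")
```

The punch line is in the first row: with the full set of controls, the extra mortality risk from being uninsured is statistically indistinguishable from zero. It only "appears" as you strip controls out.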

It isn't that there is no evidence that insurance makes people healthier; some studies do show just that. A recent study of Medicaid expansion in three states, for example, strongly suggested health benefits. But the evidence is mixed--too mixed, I'd say, to support the extremely strong beliefs about the health benefits of insurance evinced by most of the "Reality Based Community". In fact, I might even argue that the larger the study, and the better the controls, the less likely it is to show a strong health benefit from insurance.

Now, I find it intuitively very hard to believe that there is actually zero effect from giving people insurance. But the weight of this evidence suggests to me that this effect must be small. If the connection really were that strong and obvious, we wouldn't have so many good studies suggesting little-to-no health benefit.

Which leads me to a couple of responses to the Oregon study I wanted to address directly. The first comes from Brian Beutler:

Imagine a year-long study of 2000 uninsured people, 1000 of which were allowed to enroll in Medicaid, the other 1000 of which were required to remain uninsured. After a year, the data indicated that in the aggregate Medicaid provided the first 1000 significant economic security and measurable mental health benefits, but showed negligible (or more likely inconclusive) effects on heart health.

Not evident from the aggregate data, though, is that mid-way through the study, one male subject from each group began experiencing chest pains. After a few days, the man with Medicaid went to the hospital, had an abnormal EKG and an emergency angiogram, which revealed a major blockage and required immediate angioplasty. He survived. The man without Medicaid, by contrast, did nothing, until he suffered a massive MI, and died in an ambulance on the way to the hospital.

In other words, it’s possible that being uninsured cost one of my made up subjects his life, even though the made up study didn’t find significant overall improvement in measures of cardiac health. Likewise in the real world, the Oregon study was not designed to address the excess deaths issue, just like studies on insurance’s impact on mortality aren’t designed to test its impact on various health measures across the population.

But of course, most real-world studies link tens of thousands of deaths a year to uninsurance. That’s a very small percentage of the millions of uninsured in the United States. But I doubt even Medicaid’s loudest critics would shrug off 10,000 or 20,000 preventable deaths a year in most other contexts.

So instead they put their heads in the sand. Douthat more or less treats the Oregon study as a de facto refutation of that entire, separate area of research.

As we've already seen, "most real-world studies" don't show any such thing. Nor is it exactly true that the Oregon study was not designed to pick up these sorts of "excess deaths".

Let's consider, for example, the question of uninsured people who don't go to the Emergency Room when they have chest pains. Emergency room usage was actually examined in the study. They found no statistically significant difference in ER visits or hospital admissions between the treatment group and the controls. Nor is this entirely surprising. The kind of heart attack that kills you tends to hurt really badly--the pressure has been compared to having an elephant sit on your chest. People are broadly aware that really bad chest pain means go to the ER right now, and apparently, that's what they do.

This is wrong in another way: the researchers did, in fact, look at mortality. Those findings were part of the first round of results, released in 2011. They didn't find any statistically significant difference in mortality between the treatment group and the control group.

Of course, you can argue that the study was simply too small to pick up mortality differences. This is essentially the argument that Kevin Drum makes:

The first thing the researchers should have done, before the study was even conducted, was estimate what a clinically significant result would be. For example, based on past experience, they might have decided that if access to Medicaid produced a 20 percent reduction in the share of the population with elevated levels of glycated hemoglobin (a common marker for diabetes), that would be a pretty successful intervention.

Then the researchers would move on to step two: suppose they found the clinically significant reduction they were hoping for? Is their study designed in such a way that a clinically significant result would also be statistically significant? Obviously it should be.

Let's do the math. In the Oregon study, 5.1 percent of the people in the control group had elevated GH levels. Now let's take a look at the treatment group. It started out with about 6,000 people who were offered Medicaid. Of that, 1,500 actually signed up. If you figure that 5.1 percent of them started out with elevated GH levels, that's about 80 people. A 20 percent reduction would be 16 people.

So here's the question: if the researchers ended up finding the result they hoped for (i.e., a reduction of 16 people with elevated GH levels), is there any chance that this result would be statistically significant? I can't say for sure without access to more data, but the answer is almost certainly no. It's just too small a number. Ditto for the other markers they looked at. In other words, even if they got the results they were hoping for, they were almost foreordained not to be statistically significant. And if they're not statistically significant, that means the headline result is "no effect."

The problem is that, for all practical purposes, the game was rigged ahead of time to produce this result. That's not the fault of the researchers. They were working with the Oregon Medicaid lottery, and they couldn't change the size of the sample group. What they had was 1,500 people, of whom about 5.1 percent started with elevated GH levels. There was no way to change that.

Given that, they probably shouldn't even have reported results. They should have simply reported that their test design was too underpowered to demonstrate statistically significant results under any plausible conditions. But they didn't do that. Instead, they reported their point estimates with some really big confidence intervals and left it at that, opening up a Pandora's Box of bad interpretations in the press.
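For what it's worth, Drum's back-of-the-envelope claim checks out. Here is a minimal sketch of the power calculation, using only the figures in his post (1,500 people per arm, a 5.1 percent base rate) plus a standard arcsine approximation for two-proportion tests; these are illustrative numbers, not the study's own power analysis:

```python
import math
from statistics import NormalDist

def two_prop_power(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided test for a difference in
    proportions, via Cohen's arcsine effect size h."""
    h = abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(h * math.sqrt(n_per_arm / 2) - z_crit)

p_control = 0.051             # Drum's elevated-GH rate among controls
p_treated = 0.80 * p_control  # his "clinically significant" 20% cut
print(round(two_prop_power(p_control, p_treated, 1500), 2))  # ~0.27
```

On those stylized numbers, the study had roughly a one-in-four chance of flagging Drum's benchmark effect as statistically significant, which is exactly his complaint.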

I don't think that this critique will bear the weight that Obamacare supporters are claiming, for several reasons.

First of all, when you read through the supplemental material, it's clear that the Oregon researchers thought quite carefully about power. They prespecified sub-groups, like the near-elderly, who might be expected to show bigger changes, precisely because they were very attentive to this issue. Those sub-groups also didn't show a significant impact.

(Incidentally, take a moment out for a round of applause for good research design. In studies like this, the temptation is overwhelming to go back and mine the data for an effect. The folks running the study tied their own hands precisely to keep themselves from fishing for more publishable results. That is damned smart, and also damned brave. And all too rare.)

Second, the rates of high blood pressure and the like were not that low. Kevin singles out elevated glycated hemoglobin levels (a proxy for blood sugar), pointing out that if only 5 percent of the study population had such elevated levels, we shouldn't have expected to find a big improvement.

And yet we did find a significant improvement in catastrophic medical bills, which, coincidentally, also hit about 5 percent of the control group. The folks saying Oregon's sample of diabetics is too small to tell us anything do not think it is too small to tell us anything about catastrophic medical bills. I quote one Kevin Drum:

Medicaid 'nearly eliminated catastrophic out-of-pocket medical expenditures.' This suggests that poor people without Medicaid do get treated for catastrophic problems, but mostly in emergency rooms. Medicaid is certainly an improvement here even if the health outcome is the same.

Why is 5 percent a big number when we're looking at catastrophic medical bills, but too small to measure when it comes to diabetes? Because the change in catastrophic medical bills was very large--big enough for the study to pick up.
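You can see the asymmetry by running the same hypothetical power calculation with a big effect at the same 5 percent base rate. The "near elimination" figure below is my illustrative stand-in, not the study's estimate:

```python
import math
from statistics import NormalDist

def two_prop_power(p1, p2, n_per_arm, alpha=0.05):
    # Same arcsine-approximation sketch as above.
    h = abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))
    return NormalDist().cdf(h * math.sqrt(n_per_arm / 2)
                            - NormalDist().inv_cdf(1 - alpha / 2))

# Catastrophic bills "nearly eliminated": say, 5% down to 1%.
print(round(two_prop_power(0.05, 0.01, 1500), 3))    # ~1.0
# Drum's 20% relative cut in elevated GH: 5.1% down to ~4.1%.
print(round(two_prop_power(0.051, 0.041, 1500), 3))  # ~0.26
```

Same sample, same base rate; the only thing that changed is the size of the effect.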

If it were so irrational for researchers to expect to detect a similar effect on objective measures of physical health, then why did literally 0% of the experts predict that this would be the result of this study? I'm talking about academic luminaries like Jonathan Gruber, in many ways the architect of Obamacare, who was involved in this study and pronounced the results "disappointing". Not to mention Austin Frakt and Aaron Carroll, the economists who have led the argument that there's nothing to see here except better financial health. In 2011, Frakt and Carroll were confidently predicting that this study would prove the link between insurance and physical health.

The information that they are now using to declare this study underpowered--the size of the study, and the general incidence of things like hypertension in people between the ages of 18 and 65--was available back then. Why, then, did they think this study would vindicate their belief in the healing power of health insurance? Because they thought that the improvement in hypertension, cholesterol, and blood sugar control would be bigger. It wasn't. That is big news.

The people arguing that there's nothing to see here have essentially been reduced to the following pronouncements:

1. "There aren't a lot of sick people among the uninsured"

2. "Chronic diseases generally aren't well controlled, even among people who have insurance"

3. "Most people who gain access to health insurance still don't bother to go to the doctor"

I'm not sure how these are supposed to be arguments in favor of the proposition that providing health insurance to large numbers of working-age people will result in significant improvements in physical health.

Of course, we shouldn't go overboard: Oregon only looked at hypertension, cholesterol, and blood sugar. But let's not go underboard, either. Here are the most common causes of death in America:

[Chart: the leading causes of death in the United States.]

These can be grouped into three categories:

1. Chronic diseases where Oregon found no significant improvement in the major physical markers that medical intervention targets: cardiovascular disease (blood pressure and cholesterol); cerebrovascular disease (stroke, which is linked to blood pressure); diabetes.

2. Things that mostly afflict the very old or the very young, or others who are already covered by government insurance: kidney disease (dialysis is government-guaranteed); non-infectious lung disease (COPD, cystic fibrosis, and asthma); Alzheimer's.

3. Stuff that insurance might still improve, but that the study didn't measure: accidental deaths and cancer.

I'd argue, however, that accidental deaths, while not measured directly, should have shown up as a near-instant improvement in mortality rates. For working-age adults, we would expect improvements to come through improved trauma care and hospital admissions, which means that if they were significant, we'd have seen a big improvement in the mortality figures in the first year of results. We didn't.

Cancer is trickier. It is in some ways the most plausible avenue for major health improvements, because early diagnosis may help and treatment is hugely expensive.

On the other hand, while we're actually very good at treating hypertension and cardiovascular disease, we're not very good at curing cancer. Thirty to 40 percent of the people who are diagnosed with cancer die within five years, even with treatment. And since the average age of diagnosis is in the early 60s, more than half of all cases occur in people who are already covered by Medicare.

But we should have a good test starting next year. If adding huge numbers of people to the ranks of the insured substantially decreases their risk of dying from cancer, that should show up as a noticeable discontinuity in American mortality statistics. I certainly hope I'm wrong, and Obamacare represents a landmark advance in the War on Cancer, and death. And I'd be very interested to hear how big an improvement Frakt and Carroll, Beutler and Drum, are expecting to see.