
How Drug Companies Game the Placebo Effect

Statbusters

Randomized controlled trials are supposed to be the gold standard of drug testing—but companies can easily tweak the system in their favor.


The placebo effect is a psychological mystery. The act of taking medicine imparts a bonus benefit to patients beyond the gains from the medicine itself: on first encounter, this is a strange concept, an experimental discovery that surprises, but pleasantly. Once you accept that healing is in part psychological, you begin to wonder whether sugar pills could cure your migraine or mental gymnastics could keep you fit.

Those sugar pills have become a fixture in the drug development industry. Public health regulators back out the placebo effect when measuring the benefit of new drugs. For example, in the research paper highlighted in a recent BBC report, the average patient who took the drugs reported a 35 percent decrease in pain, but only 17 percentage points of that improvement were attributed to the drugs, because the placebo effect accounted for the other 18.
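To make the arithmetic concrete, here is a minimal sketch in Python of the back-out step, using the figures quoted above (the variable names are our own):

total_improvement = 0.35    # average pain reduction reported by the drug takers
placebo_improvement = 0.18  # average pain reduction in the placebo arm

# The effect attributed to the drug is the difference between the two arms.
drug_effect = total_improvement - placebo_improvement
print(f"Attributed drug effect: {drug_effect:.0%}")  # prints 17%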

The aforementioned BBC article was one of a series of recent reports that portray the placebo effect as a thorn in the side of drug developers. These companies are troubled by the increasing number of drugs that fail to beat the placebo standard, and mystified as to why the placebo effect appears to be strengthening over time.


Now, when the distance between you and a train is shrinking, is the train coming toward you, are you walking toward the train, or are both moving toward each other? The same ambiguity applies to a shrinking gap between drug and placebo: it could mean the placebo effect is getting stronger, or simply that the new drugs are weak. We think the question is more complicated than advertised.

Instead of garnering sympathy for drug manufacturers, the BBC article has the unintended consequence of revealing some skeletons in the drug testing and approval process.

Drug testing uses a randomized controlled trial (RCT) protocol, which the statistical community has praised as the gold standard for establishing cause and effect. In the simplest version of an RCT, researchers recruit patients and randomly assign them to either the treatment arm or the placebo arm; those in the latter receive sugar pills. Neither the researchers nor the patients are told who has been given the sugar pills, which is what makes the trial double-blind.
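For the curious, here is a minimal sketch of the assignment step in Python; the function and its details are a toy construction of ours, not any regulator's actual protocol.

import random

def assign_arms(patient_ids, seed=2015):
    """Shuffle the patients and split them into treatment and placebo arms."""
    rng = random.Random(seed)
    shuffled = list(patient_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # The coded table is held apart from the trial staff: neither researchers
    # nor patients see who landed in which arm, which keeps the double blind.
    return {"treatment": shuffled[:half], "placebo": shuffled[half:]}

arms = assign_arms(range(200))
print(len(arms["treatment"]), len(arms["placebo"]))  # 100 100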

The ingenious idea behind randomizing the patients is that any unforeseen risk factor affects both arms of the experiment equally, so its net effect on the comparison is zero. Let’s say a drug is less effective for color-blind people and for women. Because gender is a standard demographic variable, the typical RCT design ensures both arms contain a statistically identical mix of men and women. Imagine this isn’t so, and you have fewer women in the treatment arm. Then the drug’s measured benefit is confounded with the gender imbalance: the treatment arm, with proportionally more men, responds better than a representative group would, overstating the drug’s effect.

In our example, the researchers did not anticipate the issue of color-blindness. Thanks to the random assignment, this does not matter: the two arms should have essentially equal prevalence of color-blindness, leading to no net effect on the average response.
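A quick simulation shows why; the 8 percent prevalence of color-blindness below is a number we invented for illustration.

import random

rng = random.Random(42)
# Each entry records whether a (simulated) patient is color-blind.
patients = [rng.random() < 0.08 for _ in range(10_000)]
rng.shuffle(patients)
treatment, placebo = patients[:5_000], patients[5_000:]

for name, arm in [("treatment", treatment), ("placebo", placebo)]:
    print(f"{name}: {sum(arm) / len(arm):.1%} color-blind")
# Both arms print a prevalence close to 8 percent, so the unforeseen factor
# washes out of the comparison between the two averages.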

Statisticians have good reasons to feel proud about inventing RCTs. Reading the BBC article, however, makes us worry.

We are reminded that drug companies have a strong financial incentive to game drug testing. To develop a new drug, they invest over a billion dollars and many years, with the reward contingent upon negotiating a sequence of RCTs, which opens the path to a patent-protected monopoly.

In addition to supplying the motive, the reporter documents a set of tools used by drug developers to skew test results in their favor.

First, the drug companies try to shape the patient pool in RCTs. A cottage industry has emerged of “professional patients” who train themselves to the specifications of RCTs, like people training for marathons. They sometimes fudge data to qualify for clinical trials. A few years ago, Wired magazine wrote a fascinating profile of these so-called “Drug Test Cowboys,” who could make over $50,000 a year.

Second, the reporter talked to consultants who help drug manufacturers “avoid [drug] trial failures.” You might think the help consists of improving record-keeping, making sure patients follow the treatment protocol, and so on, but no, the tactics mentioned in the report seem rather unseemly. There is a lot of talk about artificially suppressing the placebo effect. The consultants, we are told, train scientists to avoid doing “things that enhance expectations” of trial participants. Examples of discouraged behaviors include “inappropriately optimistic” body language, looking patients in the eye, and shaking patients’ hands.

Third, many of these activities are outsourced to contract research organizations (CROs), which make decisions based on the profit motive. The interviewees in the BBC report freely speculate about how CRO workers could influence trial outcomes.

We think the clients of these consultants and CROs may be fooling themselves: if the treatment assignment is double-blind and random, these tactics should affect both arms equally, leaving the measured difference between drug and placebo untouched. For the tactics to pay off, doctors or patients would have to deduce who got the drug, or the placebo effect would have to vary with the treatment.
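A back-of-the-envelope calculation in Python illustrates the point; every effect size below is invented for the example.

true_drug_effect = 0.17
placebo_effect = 0.18
coaching_suppression = 0.05  # hypothetical dent from the consultants' advice

# If the coaching dampens expectations without unblinding anyone, it
# subtracts the same amount from both arms.
treatment_arm = true_drug_effect + placebo_effect - coaching_suppression
placebo_arm = placebo_effect - coaching_suppression

print(f"Measured drug effect: {treatment_arm - placebo_arm:.0%}")  # still 17%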

By the end of the article, the reporter is musing about whether doctors should take the opposite tack of projecting confidence, in the hope of strengthening the placebo effect. We would not be surprised if gaming occurred at various stages of a clinical trial, with researchers surgically manipulating the expectations of subjects. This strategy is particularly attractive when the health metric is subjective, such as rating one’s pain level on an 11-point scale.

It has often been said that we live in a golden age of data. We have “found data” and “data exhaust” everywhere. One day, the data will lose their sheen of innocence: analysts will no longer see them as “found” or “exhaust,” but will realize the data have been gamed and manipulated. We are learning this hard truth even in the running of RCTs, the gold standard of causal research. The statistics community has yet to come to terms with this emerging reality.
