Even among scientists and science communicators, it’s often claimed that the placebo effect is powerful, clinically useful, and demonstrates the incredible control of the mind over the body. I would argue that this is not fully supported by the literature, which shows a placebo effect that is at best unreliable, and perhaps indistinguishable from bias.
One stark illustration of this was a study published in the New England Journal of Medicine in 2011, examining the placebo effect in asthma. This study has been discussed in some detail in The Skeptic before, but we can reiterate briefly.
Forty-six asthmatic patients were randomised to receive either a real inhaler, a fake inhaler, sham acupuncture, or no treatment. The real inhaler improved lung function by 20%; the other three groups showed only around a 7% improvement. There was no placebo effect here: the fake interventions had the same effect as no treatment at all. The marginal improvements in those groups can be attributed to effects like regression to the mean.
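Regression to the mean alone can conjure an apparent improvement out of nothing. The toy simulation below (purely illustrative, not based on the study's data; all numbers are invented) enrols "patients" only when a noisy measurement of their stable lung function happens to dip low, then re-measures them later with no treatment whatsoever:

```python
import random

random.seed(42)

# Each "patient" has a stable true lung function of 70 (arbitrary units),
# but any single measurement comes with +/-10 units of random noise.
def measure(true_value):
    return true_value + random.uniform(-10, 10)

patients = [70.0] * 10_000
baseline = [measure(p) for p in patients]

# Trials tend to enrol people while they are symptomatic, i.e. when the
# measured value happens to be low.
enrolled = [i for i, b in enumerate(baseline) if b < 65]

# Re-measure the enrolled group later, with no intervention at all.
followup = [measure(patients[i]) for i in enrolled]

mean_baseline = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_followup = sum(followup) / len(enrolled)

print(f"baseline mean:  {mean_baseline:.1f}")
print(f"follow-up mean: {mean_followup:.1f}")
# The follow-up mean drifts back towards 70: an apparent "improvement"
# with no treatment, purely from selecting on a noisy measurement.
```

The untreated group improves on paper simply because it was selected at a low point of a fluctuating measurement, which is exactly why a no-treatment arm, like the one in the asthma study, is so valuable.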
However, when asked how much better they felt they were, patients told a different story. Recipients of both real and fake inhalers reported around a 50% improvement, despite the lung function tests showing there were no meaningful improvements for any of the sham groups. The discrepancy highlights the powerful role of bias: patients were reporting improvements they didn’t really have. Without the objective measurements for comparison, we might have mistakenly concluded that inhalers are a waste of time, since their sham counterparts are just as effective.
Another commonly cited benefit of the placebo effect is the effect of branding. One study used as a basis for claims like this was published in the British Medical Journal in 1981, under the title ‘Analgesic Effects of Branding in the Treatment of Headaches’, and authored by Branthwaite & Cooper.
Researchers recruited 835 women who reported using painkillers at least once a month and split them into four groups. The first group were given 50 aspirin in the packaging of a recognisable aspirin brand. The second group were given 50 aspirin in plain packaging labelled ‘analgesic tablets’. The third group were given 50 dummy pills, in the branded packaging. The final group were given 50 dummy pills labelled ‘analgesic tablets’.
Over a two-week period, the women were told to take two tablets from the box any time they had a headache, recording how much better they felt after 30 minutes and again after one hour. Scores were recorded on a six-point scale: ‘worse’, ‘the same’, ‘a little better’, ‘a lot better’, ‘considerably better’, and ‘completely better’.
Branthwaite and Cooper found that, after one hour, users of the unbranded dummy pills reported a mean pain relief of 1.78, while users of the branded dummy pills reported a mean pain relief of 2.18. Unbranded aspirin scored 2.48 and branded aspirin 2.70. They concluded that branded tablets were significantly more effective than unbranded tablets in relieving headaches, and that these effects were due to the increased confidence in obtaining relief from a well-known brand.
I believe this conclusion to be overstated.
There are some methodological limitations in the study. Participants were recruited by going door-to-door, which means some participants may have known each other and been able to compare notes on the study design. The researchers tried to control for this by requiring recruiters to skip ten houses after every successful sign-up, but the possibility remains.
The number of participants is respectable, but the duration of the study is relatively short. Moreover, women who had not recorded any headaches over the two weeks were permitted to extend the trial for a further two weeks. The dummy pills were also not taste-matched to the real aspirin, so anyone familiar with the taste of aspirin, especially users of that brand, could have identified that they were getting fakes.
The largest issue, however, is the conflation of an effect with a reported effect, a problem I understand to be common in the medical literature at large, not just in placebo research. While it may genuinely be the case that taking pills from a branded packet affords greater analgesia than an equivalent pill from an unbranded packet, it could equally be the case that taking a pill from a branded packet merely makes one more likely to report greater analgesia, regardless of any true change in physiological pain.
The data in Branthwaite and Cooper cannot tease these scenarios apart. Think back to the asthma study: patients reported huge subjective improvements, apparently on the basis of bias alone, when the lung function tests showed no meaningful change. In Branthwaite and Cooper, we have no objective measure to check against, so we cannot be sure whether the reported differences between branded and unbranded pills reflect real pain relief.
It is fair to say that pain represents perhaps a unique case in placebo research, as there are completely fair and reasonable questions to be asked about whether a change in pain and a change in the perception of pain are meaningfully different. However, even the perception of pain is still different to reported pain. Patients could experience identical levels of perceived pain, and yet still report them differently because of the role of bias.
While this may seem like a trivial distinction, it has real-world implications. In 2016, Australian regulators fined Reckitt Benckiser, the makers of Nurofen, several million Australian dollars for selling identical painkillers branded and priced differently to target specific types of pain. Consumers were being charged a premium for products like Nurofen Tension Headache or Nurofen Period Pain, when the active ingredient and dosage were actually the same across variants.
Despite this, some advertising agencies continue to advocate for selling such products at inflated prices. They cite studies like Branthwaite and Cooper to argue that Nurofen Period Pain would genuinely work better than regular Nurofen or generic ibuprofen for period pain, because of how it is branded. The placebo effect, they argue, validates the claim, and so justifies the premium.
Many claims about the placebo effect assume that the placebo itself is responsible for the observed outcomes but, as these examples show, the effects we attribute to placebos are often a mix of statistical effects, patient bias, and other artefacts of the research process. For this reason, we should be both vigilant and cautious when evaluating the clinical relevance of placebo interventions.