Spring has sprung, marking the end of the cold and flu season. Winter isn't just cold, wet, and dangerous; it also encourages the spread of colds, flu, and Covid. As more people gather indoors, airborne viruses find the perfect conditions to thrive.
Naturally, this surge in illness leads to renewed interest in how to treat a cold effectively, but despite advances in many fields of medical care over recent centuries, the common cold remains stubbornly incurable. Instead of a definitive cure, we’re left managing symptoms to ease discomfort.
So as we leave winter behind, I want to revisit a BBC Future article that was updated and republished at the start of the season, examining the evidence behind common home remedies for treating colds.
BBC Future is part of the BBC’s international online service, covering science, technology, environment, and health. They position themselves as a source of truth, facts, and science – an approach I fully support. So, what do they have to say about the evidence behind home remedies?
Immune supplements
The article begins by noting that many home remedies focus on the idea of boosting the immune system, and that for otherwise healthy individuals, immune function is only impaired when there's a deficiency in essential vitamins or minerals. If your diet is already well balanced, supplements offer little benefit. It's a valid point – despite the claims of supplement pedlars, supplements won't supercharge an already healthy immune system. The article then goes on to discuss a specific piece of research on this: a pilot study published in PLoS One in 2020.
The study involved 259 participants who were randomly assigned to receive either a supplement (containing vitamins A, D, C, E, B6, B12, folic acid, zinc, selenium, copper, and iron) or a placebo. Over 12 weeks, participants completed weekly surveys tracking any cold symptoms. The results indicated fewer runny noses and fewer coughs among those taking supplements, concluding that this low-cost intervention merits further investigation.
Given the context already discussed (supplements aren't expected to benefit otherwise healthy individuals without a vitamin or mineral deficiency), one assumes that some fraction of the cohort was mildly deficient in some nutrients, and that the supplements brought their immune function back up to par, reducing the incidence of colds. It's an interesting finding, but there are a few issues.

The study had a high drop-out rate, with nearly 50% of participants failing to complete the weekly surveys. More concerning still, it did not account for multiple comparisons – a serious methodological flaw.
Studies commonly use p-values to assess whether results are statistically meaningful or just due to chance. The typical threshold is p<0.05, meaning there's less than a 5% chance of observing results at least this extreme if there's no real effect. However, when multiple outcomes are tested, the likelihood of finding at least one significant result by chance increases. It's like playing dice: the probability of rolling a six on one attempt is low, but if you roll 20 times, the odds of getting at least one six rise to 97%.
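To make that arithmetic concrete, here's a minimal Python sketch of both calculations – the dice analogy and its statistical equivalent. The figure of 20 tests is purely illustrative:

```python
# Probability of at least one "hit" across independent attempts:
# P(at least one) = 1 - P(none at all)

p_six, n_rolls = 1 / 6, 20
print(1 - (1 - p_six) ** n_rolls)   # ~0.97: at least one six in 20 rolls

alpha, n_tests = 0.05, 20
print(1 - (1 - alpha) ** n_tests)   # ~0.64: at least one spurious 'significant' result
```

Even at a strict-sounding p<0.05, twenty independent tests give you better-than-even odds of at least one false positive.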
Many studies fail to properly adjust for this, and this pilot study is no exception. The reported improvements for runny noses (p=0.01) and coughs (p=0.04) only hold for a single comparison, but the study also examined the incidence, duration, and severity of headaches, sore throats, congestion, aches, and fever. With so many comparisons, the probability of finding a significant result purely by chance increases dramatically. Just as you can’t roll 20 dice, pick up a six and demand an extra turn, you can’t do 20 comparisons in your study and then talk about how there is only a 1% chance of getting these findings if the supplement does nothing.
While there are a handful of significant effects here, they may well just be noise in the data given the number of comparisons made, and a simple statistical correction for the multiple comparisons eliminates these findings entirely.
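For illustration, here's what the simplest such correction (Bonferroni) does to the reported p-values. The figure of 20 comparisons is my own assumption based on the outcomes listed above; the paper itself applied no correction at all:

```python
# A minimal Bonferroni sketch: divide the significance threshold by the
# number of comparisons. The count of 20 is an illustrative assumption.
reported_p = {"runny nose": 0.01, "cough": 0.04}
n_comparisons = 20
corrected_alpha = 0.05 / n_comparisons  # 0.0025

for outcome, p in reported_p.items():
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"{outcome}: p={p} -> {verdict} at corrected alpha {corrected_alpha}")
```

Neither reported p-value comes close to surviving even this crude adjustment.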
To their credit, the authors of the paper acknowledge that the results are not conclusive, calling for more rigorous research. However, these larger, more robust trials have yet to be conducted. Given this, one could argue that it’s premature for BBC Future to reference this paper in an article on the effectiveness of supplements.
Garlic
Next, their attention turns to garlic, referencing a 2001 study in which 146 volunteers took either a garlic supplement or a placebo for 12 weeks. The results were striking: the garlic group reported significantly fewer colds – just 24 cases compared to 65 in the placebo group. Cold duration was also notably shorter, averaging around a day and a half in the garlic group versus five days in the placebo group. The paper even goes so far as to claim that 'the supplement studied may represent a cure for the common cold.'
These results are so extreme as to be absurd. Are we really expected to believe that participants taking garlic experienced colds lasting only one and a half days? If garlic genuinely cured colds, wouldn’t we have recognised its effects long before 2001? After all, people were chewing willow bark for pain relief for thousands of years before salicin was refined into aspirin. And if we really had discovered that garlic was the cure for the common cold in 2001, where are the Nobel Prizes? Why are we all still getting colds, a quarter of a century later?
Beyond the implausibility of its results, the study also had serious design flaws. The placebo was not taste-matched to the supplement, meaning participants could likely tell which group they were in. This introduces a major source of bias. Another red flag is the author’s claim that garlic ‘may represent a cure’ for colds, since this study wasn’t designed to test garlic as a cure but rather as a preventative, a fundamental difference.
Perhaps the most immediate explanation for these extreme findings lies in the study’s authorship. The lead researcher is the owner of The Garlic Centre, a business that sells garlic supplements – a clear conflict of interest. A 2014 Cochrane review found no reliable support for garlic’s effectiveness against colds, and characterised this paper as ‘poor quality.’ Yet, BBC Future chose to highlight this flawed study as evidence.
Vitamin C and Zinc
In the next section, BBC Future discusses vitamin C and zinc as potential ways to shorten the duration of colds. For vitamin C, they cite two meta-analyses: one that found vitamin C reduces the severity of cold symptoms by around 15%, and a second which they claim suggests that vitamin C supplements are low risk, so there’s no harm in trying them.
However, they fail to mention that this second analysis also found no positive effect for vitamin C. While it does conclude that vitamin C is generally safe, it does not support the claim that it reduces the duration or severity of colds – an omission that I would argue misrepresents the evidence.

The pattern repeats with zinc. BBC Future cites one review which suggests that zinc shortens the duration of runny and blocked noses while also reducing coughing and sneezing. However, a second paper they reference found no such benefit. In fact, on some measures in this second paper, participants in the placebo group did better than those taking zinc, suggesting that in some cases zinc may even be counterproductive. In fairness, this time BBC Future does point this out in their coverage.
Also to their credit, they highlight a key limitation in this type of research: studies rarely, if ever, test whether participants are deficient in vitamin C or zinc before supplementation begins. Any observed benefit could simply be due to correcting an undiagnosed deficiency rather than proving that these supplements provide a meaningful advantage for already healthy individuals. So fair play to BBC Future for recognising that nuance.
What really bothers me is that nearly all the papers on zinc and vitamin C cited by BBC Future come from the same author. Harri Hemilä, a professor at the University of Helsinki, appears to be a strong advocate for vitamin C megadosing and zinc supplementation. Nearly all of his recent publications focus on the supposed benefits of these supplements – not just for colds, but also for Covid, pneumonia, cardiovascular disease, sepsis, asthma, and more.
When he’s not promoting vitamin C and zinc, he’s criticising studies that fail to find positive results. He even authored a paper accusing mainstream medicine of bias against vitamin C, insisting that the evidence in its favour is unambiguously positive but unfairly dismissed. Given this, relying so heavily on his work without acknowledging his fringe stance on the subject is a significant oversight from BBC Future.
Skeptics often criticise false balance in journalism, where fringe and mainstream science are presented as if they hold equal weight. But here, we see almost the opposite problem. Even if we accept that there is some scientific debate about the efficacy of vitamin C megadosing, BBC Future leans overwhelmingly on data from a single researcher – one who openly acknowledges that mainstream science does not support megadosing as an effective intervention.
The placebo effect
Perhaps the most frustrating part of the article is its discussion of the placebo effect. As I’ve outlined in detail, the evidence for any real therapeutic placebo effect is scant at best. The placebo effect excels at manipulating people into reporting large health improvements, but in cases where we can objectively measure purported improvements, we typically find no actual benefit.
With respect to the common cold, BBC Future cites a 2011 study in which 719 participants who had only just caught a cold were randomly assigned to one of four groups. The first group received echinacea, a popular remedy claimed to help with colds, and knew that’s what it was. The second group also received echinacea, but they were not told it was echinacea, making this a blinded echinacea group. The third group received a placebo but believed they were taking echinacea. The final group received no treatment at all.
The primary outcome measures in the study were illness duration and illness severity. Duration was assessed by asking participants whether they believed they still had a cold; the number of days they answered 'yes' determined the total duration. Illness severity was measured using a standardised questionnaire – a self-reported, subjective assessment completed by the participants twice a day.
Beyond these, the study collected a range of other data points, including stress levels, general health, and an open-ended question allowing participants to report side effects such as diarrhoea, headaches, nausea, rash, and upset stomach. There were also two objective measures: interleukin-8 concentration, which indicates immune response, and neutrophil counts, which reflect inflammation levels. Finally, researchers asked participants whether they believed echinacea works or not.
For readers who may be unfamiliar, echinacea is a popular alternative remedy derived from a type of daisy. While it has been used for centuries to treat colds, there is no good evidence to support its effectiveness in any medical context. However, it is known to interact dangerously with some medications, making its use potentially risky.

BBC Future reported this study as showing a placebo effect, where participants who believed in echinacea experienced shorter and milder colds than those who did not. They further note that this pattern held regardless of whether participants had actually taken any echinacea at all.
However, these findings rely entirely on self-reported data, where patient bias can easily influence results. People who believe they have received an effective treatment are more likely to report an improvement, even if no real improvement has occurred. This is a well-documented issue with subjective measures in clinical research, and one that is often overlooked in placebo effect research.
When looking at the objective data – the inflammation and immune response markers – there was no effect at all, either from echinacea itself or from belief in echinacea. Even the subjective findings disappear once a statistical adjustment is made for the large number of comparisons in the study, further undermining the claim that belief in echinacea had any measurable impact.
Nevertheless, BBC Future goes on to cite a second study in support of the first. This second paper, published in 2010, shows that patients who don't know whether they are receiving echinacea do not report improved cold symptoms. This may appear to reinforce the placebo effect narrative: participants in the first study who believed in echinacea reported shorter and less severe colds, while those in the second study, who lacked that belief, did not.
However, upon reading this second paper, I experienced a strong sense of déjà vu. It involved 719 participants who had only just caught a cold being randomly assigned to one of four groups – echinacea, blinded echinacea, placebo, and no treatment. It also shared almost all the same authors as the first study.
That's because this paper is, in fact, the same dataset: the same patients, documenting the same colds – just published in a different journal eight months earlier. And this earlier version found no effect, concluding that illness duration and severity were not significantly affected by echinacea compared to the control groups.
Dissatisfied with these findings, the authors appear to have revisited the raw data and conducted subgroup analyses based on whether participants believed in echinacea. They then published these results in a different journal, presenting them as a separate study all about the placebo effect. Notably, the question about belief in echinacea is not mentioned in the earlier paper, nor in the trial registration.
On the face of it, this appears to be a case of p-hacking, where researchers manipulate data analysis to produce statistically significant results. This is often done by testing multiple hypotheses, selectively reporting favourable results, or adjusting statistical methods until a desired finding emerges. While usually done with the best of intentions, p-hacking undermines the reliability of scientific research by increasing the likelihood of false positives.
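To see how easily this happens, here's a toy Python simulation – entirely synthetic data, not the echinacea trial's – in which both arms are drawn from the same distribution, then carved into arbitrary post-hoc subgroups:

```python
# Synthetic illustration of p-hacking via subgroup analysis: there is no
# real effect anywhere, yet repeated slicing tends to find 'significance'.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_per_arm, n_subgroups = 360, 20  # arm size loosely mirrors the trial; 20 splits is illustrative

treatment = rng.normal(size=n_per_arm)  # identical distributions:
placebo = rng.normal(size=n_per_arm)    # any difference is pure noise

significant = 0
for _ in range(n_subgroups):
    # A randomly defined subgroup, standing in for post-hoc splits
    # such as 'participants who believe in echinacea'
    t_sub = treatment[rng.random(n_per_arm) < 0.5]
    p_sub = placebo[rng.random(n_per_arm) < 0.5]
    significant += ttest_ind(t_sub, p_sub).pvalue < 0.05

print(f"{significant}/{n_subgroups} subgroup tests 'significant' at p<0.05 "
      f"(about {n_subgroups * 0.05:.0f} expected by chance alone)")
```

Run enough of these splits and 'significant' findings appear on schedule, at a rate of roughly one in twenty.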
Trial registration is a tool which is designed to combat this, keeping researchers honest about what questions they will ask, and what analyses they will perform. In this case, however, it appears to have fallen down; the second paper with the modified analysis simply does not mention that the registration exists.
BBC Future appears to be a genuine effort to present good science, but in this case, it fell short. Rather than offering a careful, critical analysis, the article relied on flawed studies, overlooked biases, and misrepresented key concepts.
Science journalism should do more than just report findings – it should question them. A more skeptical approach would have provided a clearer, more accurate picture of the evidence. Perhaps next time, BBC Future will live up to its stated mission and truly delve deeper.