This article originally appeared in The Skeptic, Volume 3, Issue 1, from 1989.
Interest in ‘alternative medicines’ is said to be increasing. However, a common criticism of alternative medicines is that there is little evidence that they work. Medicine is often accused of having double standards on this point, as many orthodox treatments have not been subjected to rigorous trials. To see why this accusation is unfounded, the nature of medical evidence should be examined in more detail.
There are two types of evidence that a treatment works: confirmatory and supportive, the former being stronger evidence than the latter. Valid clinical trials are the surest type of investigation, ideally using randomised double-blind controlled methods. A double-blind trial compares an experimental remedy with a placebo — an inert substance — which is made to look like the remedy being investigated. Neither the patient nor the assessor of the patient’s progress knows whether active medicine or placebo is being taken. Patients are assigned randomly to either the placebo or the experimental group, and quite large numbers are required before a result can be judged statistically significant. Such trials are the only truly scientific method of assessing the value of a remedy.
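To make that logic concrete, the sketch below simulates a small placebo-controlled trial. The language (Python), the patient numbers and the response rates (30% for placebo, 45% for the remedy) are purely illustrative assumptions, not figures from any real trial; the point is only that random assignment plus a simple significance test is what separates a genuine effect from chance.

```python
# A minimal sketch of the logic of a randomised placebo-controlled trial.
# All numbers are hypothetical: 400 patients, a 30% placebo response rate
# and a 45% response rate to the remedy, chosen only for illustration.
import math
import random

random.seed(1)

def simulate_trial(n_patients=400, p_placebo=0.30, p_remedy=0.45):
    # Randomly assign patients to placebo or remedy; in a real double-blind
    # trial neither patient nor assessor would know which is which.
    groups = ["placebo", "remedy"] * (n_patients // 2)
    random.shuffle(groups)

    outcomes = {"placebo": [], "remedy": []}
    for group in groups:
        p = p_placebo if group == "placebo" else p_remedy
        outcomes[group].append(1 if random.random() < p else 0)
    return outcomes

def two_proportion_p_value(x_a, n_a, x_b, n_b):
    # Two-proportion z-test: the probability of seeing a difference this
    # large (in either direction) if the remedy were no better than placebo.
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (x_b / n_b - x_a / n_a) / se
    return math.erfc(abs(z) / math.sqrt(2))

results = simulate_trial()
placebo, remedy = results["placebo"], results["remedy"]
p = two_proportion_p_value(sum(placebo), len(placebo), sum(remedy), len(remedy))
print(f"placebo responders: {sum(placebo)}/{len(placebo)}")
print(f"remedy responders:  {sum(remedy)}/{len(remedy)}")
print(f"two-sided p-value:  {p:.4f}")
```

Rerun with far fewer patients per group, the same simulation will often fail to reach significance even though the simulated remedy genuinely works, which is why large numbers are needed.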
However, it is difficult to conduct such trials, and in the interim patients must be treated. We are still awaiting the results of double-blind trials of appendicectomy, for example. In the absence of confirmatory evidence, the only alternative is a less certain method: an informed judgement based on scientific knowledge of the principles of human physiology, pathology and pharmacology. Supportive evidence is collated from laboratory work and indirect clinical research across numerous disciplines. A less precise but valuable view can thus be formed — and we continue to remove inflamed appendices.
It is important to contrast both these methods with clinical impression. ‘I have seen N patients with condition X, and treated them all with remedy Y, and my impression is that they have all benefited’, says the practitioner. The recognition that such impressions are totally inadequate as a basis for assessing remedies was a great step forward for medicine — a step none of the alternative therapies have taken.
Alternative practitioners excuse the absence of proper trials by saying that their treatments are too personalised, too closely tailored to individual patients, to allow randomised double-blind trials. This might be a true description of their practice, but beneath it lies a serious question. If such specific, sensitive remedies are concocted anew for each patient, where does the knowledge that makes this possible come from?
Homeopathy serves as a good example with which to examine this point more carefully. A patient suffering from condition X approaches a homeopath. The homeopath listens to the patient (which, despite much propaganda to the contrary, most doctors would!) and arrives at the conclusion that distillate of dogwort is required. There is, however, no proper research to show that this substance is effective for condition X — only a 150-year-old suspicion that undiluted dogwort causes symptoms similar to those of condition X when swallowed by healthy subjects. How does the practitioner know to use this substance? The stock answer is that the remedies are chosen by the application of principles (the law of similars, in the case of homeopathy) and a knowledge of the homeopathic pharmacopoeia. If, in the absence of controlled trials, this is to be considered an acceptable reply, then there must be some evidence that the principles of the practice are valid ones.
Given the paucity of in vivo evidence, some alternative practitioners (although not many!) turn to laboratory research in their hunt for evidence to support their claims to be using valid principles. Last year Nature published a paper which claimed to provide in vitro evidence for an effect which could have helped to explain homeopathy — the start of the Benveniste fiasco. The research appeared to show that basophil degranulation (an immune response in white blood cells) continued to be triggered by solutions of an antigen even at concentrations of 10⁻¹²⁰. However, this was followed shortly afterwards by a damning report from a team of investigators who found serious errors in the research methods involved, invalidating the research.
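The figure of 10⁻¹²⁰ is worth pausing over. The back-of-envelope calculation below is not from the original article, and its starting assumption (a full mole of antigen in the initial solution, already a very generous quantity) is purely illustrative, but it shows why a claimed effect at such concentrations demanded extraordinary evidence.

```python
# Illustrative calculation: how many antigen molecules could remain at a
# concentration of 10^-120? Assumes (generously) one mole of antigen to start.
AVOGADRO = 6.022e23                      # molecules per mole
starting_molecules = 1 * AVOGADRO        # hypothetical starting quantity
dilution_factor = 10.0 ** -120

expected_molecules = starting_molecules * dilution_factor
print(f"expected antigen molecules remaining: {expected_molecules:.1e}")
# Roughly 6e-97 -- effectively none: any biological effect would have to
# occur in a solution containing no molecules of the original substance.
```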
If treatments are not proven, they are experimental. It is surely questionable to charge people for such remedies, quite apart from the health risks involved. Examining the field of alternative medicine, one is left with a bleak impression. There seems to be a collection of remedies with no evidence of their efficacy, being selected according to principles without foundation. Those who would consider such remedies should beware — but is it right that the principle of caveat emptor (let the buyer beware) should apply to health care? I think not. I believe it is time for better regulation of this field.