Challenge experiments, in which healthy human subjects are intentionally infected to test the efficacy of a new vaccine, can be invaluable. Great strides in understanding how to treat and prevent such infectious diseases as smallpox, yellow fever, malaria, and influenza have resulted from research involving human beings — both volunteers and those who were “volunteered” to participate. The history of medicine is studded with episodes in which children, men, and women were deliberately infected with pathogens in the hope of elucidating ways of mitigating, preventing, or curing infections — an approach that continues to be pursued today, as evidenced by the study reported by DeVincenzo et al. in this issue of the Journal (pages 711–722).
Many of the early and most notable challenge experiments involved the scourge of smallpox, a ferocious and often fatal disease. In 1721, Lady Mary Wortley Montagu returned to London from Turkey with the news that variolation — deliberately introducing smallpox pus into the body to produce a milder-than-usual case of smallpox — could also confer lifelong immunity to the disease. After Montagu had her physician variolate her own daughter, she encouraged the royal family to adopt the procedure. In August 1721, physician Charles Maitland received royal permission to undertake a demonstration of variolation at Newgate Prison. Promised pardons, six prisoners underwent variolation in the presence of court physicians. The prisoners survived both the variolations and the subsequent challenges, in which they were exposed to persons with smallpox. Once Maitland had repeated the experiment in orphans, two princesses were variolated in 1722. These demonstrations did much to spur the adoption of variolation, and in 1757, the variolated included 8-year-old Edward Jenner.
Many physicians are familiar with Jenner’s discovery that protection against smallpox could be conferred by infecting (vaccinating) people with cowpox. After vaccinating the son of his gardener in 1796, Jenner challenged the protection by exposing the boy to smallpox. Jenner’s announcement in 1798 that vaccination caused only a milder infection, conferred immunity to smallpox, and did not spread smallpox to others inspired challenge experiments in London, Paris, Vienna, and Boston. At a hospital on one of Boston’s harbor islands, Harvard professor Benjamin Waterhouse vaccinated 19 boys and, 3 months later, inoculated them with smallpox pus; none of them developed the disease. He then inoculated 2 healthy but unvaccinated boys with smallpox; both developed the disease, but none of the vaccinated boys who were challenged by exposure to them became ill.1 Waterhouse became an uncompromising champion of vaccination — and it is, of course, thanks to widespread vaccination that smallpox has since been eradicated.
Beyond smallpox, the advent of the germ theory of disease fostered a raft of experiments to establish the causative agents of other diseases, and in the absence of animal models, such research often culminated in demonstrations in which disease was induced in healthy human beings by means of a purified culture of the germ in question. Researchers seeking to establish the utility of vaccines against measles, mumps, pertussis, and tuberculosis likewise used challenge experiments to test the degree of protection each new vaccine conferred.
In some cases, physicians turned to members of their own families to serve as experimental subjects. In the 1930s, for example, pediatricians Hugh and Edith MacDonald injected two of their four sons with pertussis vaccine. To assess its efficacy, the couple then “sprayed whooping cough microbes into the noses of all of them.” The two older boys (called “vaccinated volunteers” by their parents) remained free of whooping cough. The younger children, twin 6-year-old boys, developed the severe coughs, paroxysms, and whoops associated with the disease.2 Much more frequently, however, researchers in the first half of the 20th century turned to institutionalized children, soldiers, and prisoners as convenient populations for development and testing of new vaccines and for the study of such infectious diseases as gonorrhea, syphilis, and hepatitis.
Unfortunately, researchers sometimes undertook such efforts with little attention to the ethical concerns raised by purposefully making people sick. In addition to experiments that involved intentionally infecting human subjects with pathogens, experimenters have mounted other “challenge” experiments, including interventions explicitly intended to disrupt normal psychological functioning. Whatever the challenge agent, the intent to create unpleasant symptoms, disease, and discomfort has made challenge experiments a controversial approach to clinical research. One of the challenges for contemporary challenge studies, therefore, is successfully negotiating the balance between the need to advance biomedical understanding and the imperative to respect the welfare and autonomy of participants.
Nearly 50 years ago, in 1966, Henry K. Beecher made an urgent plea in the Journal for serious attention to the increasing number of ethical errors in American clinical research, outlining 22 “unethical or questionably ethical studies.”3 One example was a clinical experiment of the 1960s at the Willowbrook State School in New York, a chronically underfunded institution with high rates of hepatitis, in which healthy children with intellectual disabilities received intramuscular injections or drank milkshakes containing hepatitis virus so that investigators could monitor the disease in an effort to develop effective means of preventing hepatitis or lessening its severity.4
Over the course of the 20th century, as American society gradually expanded the scope of what it considered moral questions, bringing new attention to the rights of children, women, prisoners, and racial and ethnic minority groups, medical researchers similarly sought to protect vulnerable subjects. Beecher’s article helped to prompt federal legislation, the creation of national ethics commissions, and the generation of new guidelines and organizations to protect the rights of research subjects. These measures included requirements for all prospective studies to undergo review by an institutional review board, requirements for researchers to obtain written informed consent, and special regulations for pediatric research. In the past two decades, concern about healthy volunteers has also occupied federal policymakers. The 1999 death of Jesse Gelsinger in a gene-therapy trial at the University of Pennsylvania and the 2001 death at Johns Hopkins of healthy volunteer Ellen Roche, who had received a challenge agent in an asthma study, prompted temporary closures of several research facilities.
Nevertheless, challenge experiments remain an important methodologic approach in the study of such infectious diseases as malaria, cholera, and influenza and in the investigation of such psychiatric disorders as schizophrenia and post-traumatic stress disorder — even as they challenge investigators to provide a compelling rationale for undertaking interventions that intentionally cause disease and discomfort. All experiments involving humans must be thoughtfully planned and carefully evaluated in terms of the risk to subjects and the social value of the knowledge to be gained. In addition, subject selection must be equitable, participants must be informed and adequately compensated, and they must be allowed to withdraw from the study at any time.5 Research involving challenge experiments imposes the same responsibilities on investigators, who must also contend as best they can with the greater visibility that intentional infection brings to biomedical science.