
What It Was Like To Be Sick In 1884

November 2024
19 min read

American medicine in a crucial era was at once surprisingly similar and shockingly different from what we know today. You could get aspirin at the drugstore, and anesthesia during surgery. But you could also buy opium over the counter, and the surgery would be more likely to be performed in your kitchen than in a hospital.

IN 1884 ALMOST three-quarters of America’s fifty million people lived on farms or in rural hamlets. When they fell ill, they ordinarily were treated in their own homes by someone they knew, someone who might not be a trained physician but a family member, neighbor, or midwife. Only a handful of smaller communities boasted hospitals, for they were still a big-city phenomenon. And in those cities, only the workingman and his family, the aged and dependent, the single mother, or the itinerant laborer would normally have received institutional care. For the middle class, a bed among strangers in a hospital ward was a last resort. Even within the working class of America’s rapidly growing cities, the great majority of patients too poor to pay a private physician never entered a hospital but instead received free outpatient care from dispensaries, from paid municipal physicians, and from hospital outpatient departments. The hospital was a place to be avoided—often a place in which to die—and not the fundamental element in medical care that it has become in the twentieth century.

Some of the ills Americans fell victim to a century ago were the same as those we still suffer from—bronchitis, rheumatism, kidney and circulatory ailments; others have become uncommon, like malaria; less common, like syphilis; or have been banished entirely, like smallpox. Tuberculosis was by far the greatest single killer of adults; gastrointestinal ills were the greatest scourge among children. Both tuberculosis and the “summer diarrheas” reflect and document the grim realities of a society in which food was sparse for many, work exhausting, living conditions filthy, and sanitation and water supplies well suited to the spread of infectious disease.

Americans still lived in the shadow of the great epidemics. Yellow fever had scourged the lower Mississippi valley only a few years before, and the threat of cholera that had scarred Europe in the early 1880s was just lifting; there was little reason to anticipate that Americans would be spared the devastation of any nationwide epidemic until the onslaught of influenza in 1918. But every year, of course, they had to contend with the usual exactions of typhoid fever, syphilis, malaria, measles, smallpox, and diphtheria.

Life expectancy at birth was a little over forty for the population generally—a bit more than half of what it is today. For those born in large cities it could be much lower. In Philadelphia, for example, life expectancy at birth was 40.2 for white males and 44.8 for white females—and 25.2 and 32.1, respectively, for black males and females. But for those fit enough to survive the hazards of infancy and childhood, life expectancies were not radically different from those prevalent in the United States today. A forty-year-old Philadelphian could expect to live to 65 if a man and to almost 69 if a woman; for blacks the figures were 58.6 and 64. Younger people died of ailments such as measles, diphtheria, diarrheas, croup, and pneumonia. Although a smaller proportion of Americans survived to die of cancer and the degenerative diseases now so important, their experience with them (excepting apparently a much lower incidence of most kinds of cancer) was similar to ours of the 1980s. Older people suffered and died from roughly the same sorts of things they still do.

MOST AILMENTS were, in the terminology of the day, “self-limited.” In the great majority of cases a patient could expect to recover—with or without the physician’s ministrations. This was understood and acted upon; even the wealthy did not ordinarily call a physician immediately except in the case of severe injury or an illness with an abrupt and alarming onset. The decision to seek medical help would be made gradually; first a family member might be consulted, then a neighbor, finally perhaps a storekeeper who stocked drugs and patent medicines—all before turning to a doctor. Many housewives kept “recipe books” that included everything from recipes for apple pie and soap to remedies for rheumatism and croup. Guides to “domestic practice” were a staple for publishers and peddlers. It is no wonder that doctors a century ago were so critical of the care provided by what they dismissed as uneducated and irresponsible laymen.

Perhaps most annoying to physicians was the competition provided by druggists. Pharmacists in cities and small towns often served as primary-care physicians. (In rural areas, on the other hand, doctors often served as pharmacists, buying drugs in wholesale lots and selling them at retail.) Aside from simply recommending patent medicine, druggists might well use prescriptions written by local physicians for the same patient in a previous illness or those issued to another patient suffering from what seemed to be the same ailment. As an indignant doctor put it, this amounted to “surreptitiously appropriating the doctor’s brain and recipe to guillotine the doctor’s income.” No statutes regulated the use of prescriptions, and most laymen felt that, once paid for, a prescription was their property. Logically enough they had the prescription filled whenever “it” struck again. Similarly, no laws controlled access to drugs; no distinction was made between prescription and over-the-counter remedies. Only their pocketbooks limited laymen’s drug purchases. Patients could and did dose themselves with anything from opium to extremely toxic mercury, arsenic, and antimony compounds. It is no accident that some physicians were beginning to discern a growing narcotic-addiction problem and urged control over the sale of drugs. Their critics dismissed such demands as self-serving attempts to monopolize the practice of medicine.

Just as there were no rigid controls over the sale of drugs, so there were almost no legal constraints over medical education and access to medical practice—and it was also a period without health insurance and with an enormous number of working people and small farmers too poor to employ a private physician. America in 1884, then, was a highly competitive medical marketplace—one in which the number of paying patients was small in comparison with the total number of men (and a few women) calling themselves physicians and seeking to earn a living through practice. A handful of prominent urban consultants might earn as much as ten thousand dollars a year, but this relatively small group monopolized practice among the wealthy. Their far more numerous professional brethren had to scuffle day and night to make a modest living from the fees paid by artisans, small shopkeepers, and farmers. Codes of ethics adopted by medical societies at the time (though enforced only sporadically) were in good measure aimed at avoiding the most brutal aspects of competition: speaking behind another practitioner’s back, for example, or selling and endorsing secret remedies, or guaranteeing cures. A more subtle tactic involved planting newspaper stories detailing a spectacularly successful operation or unexpected cure.

It was not until the end of the 1880s that the first effective state licensing laws were enforced. Before that almost anyone could hang out a shingle and offer to treat others. From the perspective of the 1980s, even the best-educated physicians had invested comparatively little time or money in their education, while many successful practitioners had trained as apprentices with local doctors and had never graduated from a medical school or seen the inside of a hospital ward. Even graduates of the most demanding medical schools had followed curriculums based on formal lectures with little or no bedside or laboratory training to supplement textbooks and lectures. In 1884 reformers had just succeeded in extending the length of medical school training at the best institutions to three years of classes. But each year’s session lasted only six months and still failed to include much in the way of clinical training. Furthermore, not even the leading medical schools demanded more than a grammar school education and prompt payment of fees as an admission requirement. A minority in the profession had always found ways to supplement their limited formal education by hiring tutors, traveling to Europe for clinical training, and competing for scarce hospital staff positions. But even such efforts and the financial resources they implied did not guarantee economic success once in practice: patients often chose doctors on the basis of their personalities, not their skills.

Just as there were no rigid controls over the sale of drugs, so there were almost no legal constraints over medical education and access to medical practice—and it was a period without health insurance.

Medicine was a family affair. A successful practice was, by definition, what contemporaries termed a family practice. The physician treated not individuals but households: husband and wife, children and servants, and—in rural areas—farm animals as well. Minor surgery, childbirth, the scrapes, scars, and infectious diseases of childhood, as well as the chronic ills of invalid aunts or uncles and failing grandparents—all were the family doctor’s responsibility. To call in a consultant was, in some measure at least, to confess inadequacy. Of course, in rural areas, and in most small towns, the option of consulting a specialist did not exist, while calling in another local practitioner was to risk the chance that the new doctor might “steal” the first practitioner’s family. A few oblique words could undermine the confidence that a physician had spent years cultivating.

Not surprisingly, business relationships between doctor and patient were casual. A well-organized physician presented bills every few months, but this was regarded as optimistic; surviving financial records indicate that accounts could run on for years with small payments being made from time to time. In some cases only death would bring a final settlement (usually at a substantial discount). Practitioners in rural areas often were paid in kind when paid at all. A young Minneapolis physician in the early 1880s, for example, received oats, hay, cords of wood, a watch, and a dissecting case in lieu of cash, as well as services ranging from haircutting and mowing to buggy repair and housecleaning. The same physician also supplied his neighbors with a variety of drugs and patent medicines and served as house practitioner to a local bordello (at least this account paid cash!).

 

DOCTORS SOUGHT TO IMPROVE their shaky circumstances in various ways. Some offered discounts for prompt payment—and threatened to add interest charges to bills remaining unpaid after ninety days. But most physicians were too insecure economically to take the chance of badgering patients and simply assumed that a substantial proportion of their accounts would never be paid. A rule of thumb was that bad debts should be limited to a third of the total billings.

Physicians also sought to bring some regularity to their economic relationships by adopting fee tables in their local medical societies—schedules specifying minimum charges for everything from ordinary home and office visits to obstetrics and assorted surgical procedures. In Scott County, Iowa, physicians agreed, for example, to charge one to five dollars for “office advice,” a dollar for vaccination, ten to twenty-five dollars for obstetrics—and ten to one hundred dollars (strictly in advance) for treating syphilis. Fee schedules always included rates for night attendance and mileage, which was particularly important in rural areas where the bulk of a doctor’s time could be spent in travel. (Bicycles were already being suggested as a way of speeding the progress of an up-to-date physician’s rounds.) In some counties physicians adopted an even more tough-minded stratagem: they blacklisted deadbeats—patients able but unwilling to pay. But none of these measures could alter the fundamentally bleak economic realities most practitioners faced.

 

One consequence, however, was a doctor-patient relationship very different from the often impersonal transactions to which we have become accustomed. Economic dependence was only one factor. Most physicians practiced in small communities where they knew their patients not simply as cases but as neighbors, as fellow church members, as people in families.

Physicians arrived at the patient’s home with an assortment of ideas and practices very different from those available to their successors in 1984. Particularly striking was the dependence upon the evidence of the doctor’s own senses in making a diagnosis. The physician could call upon no X rays to probe beneath the body’s surface, no radioisotopes to trace metabolic pathways, no electrocardiograph to reveal the heart’s physiological status. Most diagnoses depended on sight and touch. Did the patient appear flushed? Was the tongue coated? Eyes cloudy? Pulse fast or slow, full or shallow? What was the regularity and appearance of urine and feces? Was there a family history that might indicate the tendency toward a particular illness or an idiosyncratic pattern of response to drugs? Even more important than the doctor’s observations, of course, was the patient’s own account of his or her symptoms. (An infant’s inability to provide such information was always cited as a problem in pediatric practice.) The physician’s therapeutic limitations only emphasized the importance of diagnostic and prognostic skills. As had been the case since ancient times, a physician’s credibility and reputation were judged inevitably in terms of the ability to predict the course of an illness.

THIS IS NOT TO SUGGEST that medicine’s diagnostic tools had remained unchanged since the days of Galen. The stethoscope had been in use since the first quarter of the nineteenth century and had been improved steadily; available evidence indicates, however, that many practitioners were not proficient in its use, while some failed to employ it at all. Doctors who had never been taught to use the stethoscope found it easy to dismiss as an impractical frill.

The thermometer also was available to practitioners in 1884, and it was becoming more than an academic curiosity to the average doctor. Its hospital use was growing routine, although the first temperature charts in American hospitals actually date back to the 1860s. The thermometer was adopted slowly because mid-nineteenth-century versions were hard to use, expensive, and seemed to add little to what most physicians could easily ascertain by looking at a patient, feeling the pulse, and touching the forehead; any grandmother could tell when someone had a fever.

For the doctor, the challenge lay in predicting an ailment’s course and suggesting appropriate treatment. The ophthalmoscope and laryngoscope could be of great value in diagnosing ills of the eye and throat; but although they had been known since the 1850s, their use was limited largely to a minority of big-city practitioners. They were still the specialist’s tools, and the regular medical curriculum offered no training in their use. More ambitious physicians also had at their disposal a whole battery of urine tests. (A urinalysis manual of over five hundred pages had, in fact, been translated from the German a few years earlier.) Blood-cell counting was still an academic exercise, but many physicians tested routinely for albumin and sugar in the urine of patients suspected of having kidney ailments or diabetes. In the latter case, availability of a simple chemical test for sugar marked an aesthetic if not intellectual advance over the earlier practice of tasting the urine in question. But the availability of this handful of instruments and laboratory tests had not fundamentally altered traditional medical practice. The era of high-technology diagnosis still lay far in the future.

Most diagnoses depended on sight and touch. Did the patient appear flushed? Was the tongue coated? Eyes cloudy? Even more important was the patient’s account of his or her symptoms.

The physician brought three basic resources to the sickroom. First, of course, was the doctor’s individual presence. In an era when patients with severe, and possibly fatal, infectious diseases were treated under the evaluating eyes of family, friends, and servants, nothing was more important than a physician’s ability to inspire confidence. And contemporary medical doctrine specifically recognized that patient confidence was a key ingredient in the physician’s ability to cure—even in surgery. The second resource available to the practitioner was the contents of the medical bag, the drugs and instruments of the physician’s trade. The final resource lay in the doctor’s mind: the assumptions about disease that explained and justified its treatment.

The doctor’s bag was filled largely with drugs: pills, salves, and powders. Medical therapy revolved around their judicious use; in fact, physicians used the term “prescribe for” synonymously with “treat.” Doctors often complained that patients demanded prescriptions as proof that the physician had indeed done something tangible for them. And in most cases patients and their families could see and feel the effects: most drugs produced a tangible physiological effect. Some induced copious urination, while others caused sweating, vomiting, or—most commonly—purging. In addition, the majority of physicians carried sugar pills, placebos to reassure the anxious or demanding patient that something was being done.

PHYSICIANS WERE WELL AWARE that, in part at least, the efficacy of any drug lay in the realm of psychology and not of physiology. The danger, as one warned, was that practitioners might become so conscious of such effects that they would lose sight of the utility of genuinely active drugs. Another practitioner noted that he did not use sugar pills in treating patients in whom he had “confidence”—presumably those better educated and more congenial to him.

In the half-century before 1884, physicians had become increasingly skeptical of the staggering dosage level of drugs routinely employed in the first quarter of the century. Few drugs had actually become obsolete, but mild doses and a parallel emphasis on tonics, wines, and a nourishing diet had come to be considered good practice. Bleeding, too, had dropped out of fashion, though it was still regularly employed in a number of conditions—the beginning of a fever, for example, or with unconscious victims of severe head injuries.

None of this is meant to give the impression that remedies used in 1884 exerted only psychological effects. Even from the perspective of 1984, medicine a century ago had a number of effective tools at its command. Opium soothed pain and allayed diarrhea, digitalis was useful in certain heart conditions, quinine exerted a specific effect on malaria, and fresh fruits relieved scurvy. Vaccination had made major inroads against smallpox (even though technical problems and lax enforcement had made it less than 100 percent effective). Aspirin (although not under that name) had just come into widespread use in the treatment of fevers and rheumatism; it was, in fact, so fashionable that cautious physicians began to warn of its possibly toxic effects. Mercury did have some effect on syphilis, even if it could be dangerous and debilitating. (Some doctors still believed that mercury compounds were not exerting a curative effect until the patient was “salivated”—that is, beginning to show symptoms of what we would now regard as mercury poisoning.)

But the effectiveness of these drugs did not undermine the traditional home and family orientation of medical practice: in contrast to 1984, every weapon in the physician’s armory was easily portable. Contemporaries sometimes complained of difficulties in finding competent nurses and continuous medical help in critical ailments, but neither problem was serious enough to convince middle-class patients that they might best be treated away from their families.

Even surgery was most frequently undertaken in the patient’s home—despite the fact that revolutionary changes already had begun to reshape surgical practice. One source of this change was the rapid dissemination of anesthesia, which by 1884 was employed routinely. The question was not whether to administer an anesthetic in a serious operation but which one it should be; ether, chloroform, and nitrous oxide all had their advocates.

Despite the availability of anesthesia, however, major operations remained comparatively uncommon. Many of the technical problems of blood loss, shock, and infection had not been solved. But the style of surgery certainly had changed, as had the surgical patient’s experience. “Formerly,” as one surgeon explained the change, the “great aim of the surgeon was to accomplish his awful but necessary duty to his agonized patient as rapidly as possible.” Surgeons even timed their procedures to the second and vied with each other in the speed with which they completed particular operations. Now, the same surgeon explained, “we operate like the sculptor, upon an insensible mass.” The best surgeon was no longer necessarily the fastest.

Physicians had difficulty envisaging how microorganisms could bring about catastrophic change in individuals so much bigger. But these views were in the process of rapid change.

But doing away with surgical pain had not removed the more intractable dilemma of infection; by increasing the amount of surgery and length of time occupied by particular procedures, it may actually have worsened the problem of surgical infection. In the mid-1860s the Glasgow surgeon Joseph Lister had suggested that such infection might be caused by microorganisms—ordinarily airborne—and proposed a set of antiseptic procedures to keep these organisms from growing in exposed tissue. Immediate reactions were mixed. At first Lister’s ideas were seen as extreme and wedded arbitrarily to a particular antiseptic, carbolic acid—“Lister’s hobbyhorse” as skeptics termed it. But Lister gradually modified his technique, and by 1884 his point of view had come to be accepted by most American surgeons. This was also the year when surgeons learned that Queen Victoria had awarded Lister a knighthood; he already had become a historical figure.

But the problem of surgical infection was still far from solved in practice. Most surgeons and hospitals paid due homage to Lister but had no consistent set of procedures for keeping microorganisms away from wounds and incisions. Medical memoirs of this period are filled with stories of surgeons operating in their street clothes, of their using dressings again and again without intervening sterilization. Natural sponges were washed and reused. The day of aseptic surgery, in which every aspect of the operating room was calculated to keep contaminating objects as well as the atmosphere away from wounds, was still a decade away.

PART OF THE DIFFICULTY surgeons experienced in understanding Lister’s theories paralleled the more general problem of relating microorganisms to infectious disease: physicians had difficulty envisaging how such tiny living things could bring about catastrophic change in individuals so much bigger. And why did one person exposed to a disease fall victim while another continued in good health?

Tuberculosis was a particularly good example. The single most important disease of the century, in terms of mortality, tuberculosis always had been seen as caused by a combination of constitutional and environmental factors such as diet, work, and cleanliness. The simple announcement that a particular bacterium was associated with the disease could not change these age-old views. One needed both seed and soil to grow a crop, as a frequently used analogy ran: in their enthusiasm for the germ theory, physicians should not lose sight of the fundamental role played by the soil—that is, the individual’s life history and constitutional endowment—in preparing the way for infection. These views would not be changed easily, for they incorporated centuries of acute clinical observation as well as the authority of tradition.

 

Nevertheless, these ideas were in the process of rapid change at precisely this moment. The previous year the German bacteriologist Robert Koch had announced his discovery of the organism responsible for cholera and the year before that, in 1882, of the tuberculosis bacillus. Thus, within the space of two years, one scientist had unearthed the cause of the century’s greatest killer and its most feared epidemic disease. (Cholera killed far fewer than tuberculosis, but its abrupt and unpredictable nature made it particularly terrifying.) Both discoveries had made front-page news, but it was not yet clear what they meant in practical terms. Like many physicians, most well-informed laymen were still a trifle skeptical. “Now the microbe may be a very decent fellow, after all, when we get acquainted with him,” as one whimsical observer put it, “but at present I only know him by reputation, and that reputation has been sicklied o’er with the pale cast of thought of medical men … who have charged him with things that we, unprofessionals, look to them to prove.” Not only did many Americans share such sentiments, but the technical means to turn this new knowledge into public health practice and effective therapeutics still lay in the future.

 

Bacteriological techniques, for example, had not yet become so routine that suspected cases of typhoid or tuberculosis could be diagnosed—and thus made the basis for a program of isolating sufferers. Public health departments, in any case, were unaccustomed to exercising such power or supporting laboratory work. And there were other problems as well. Physicians were still unaware that certain ills might be spread by individuals displaying no apparent symptoms—so-called healthy carriers—or by insects. (Though careful readers of the medical journals in 1884 might have taken notice of the report from a Cuban publication that a Dr. Carlos Juan Finlay had suggested that yellow fever might be spread by mosquitoes—a conjecture proven correct by a team of American investigators at the end of the century.) Perhaps most frustrating to doctors sympathetic to the germ theory was the difficulty of turning this insight into usable therapeutic tools. Knowing what caused a disease, after all, was not the same thing as treating it.

WE CAN HARDLY EXPECT age-old medical ideas to have changed overnight—especially in the absence of new ways of treating patients. Physicians still found it difficult to think of diseases as concrete and specific entities. Fevers, for example, still tended to melt into each other in the perceptions of many doctors; diphtheria and croup, even syphilis and gonorrhea, were regularly confused. It was not simply that such ills were hard to distinguish clinically but that many physicians believed that they could shift subtly from one form to another. Perhaps most interesting from a twentieth-century perspective, physicians still were very much committed to the idea that environmental factors could bring about disease. Every aspect of one’s living conditions could help create resistance or susceptibility to disease; and some factors, such as poor ventilation or escaping sewer gas, seemed particularly dangerous. Stress or anxiety also could produce any number of physical ills. Thus, a leading medical school teacher could explain to his class in 1884 that diabetes often originated in the mind.

Medicine always had found a place for stress and the “passions” in causing disease, and by the early 1880s physicians had begun to show an increasing interest in what we would call the neuroses: complaints whose chief symptoms manifested themselves almost entirely in altered behavior and emotions. Such ills were becoming a legitimate—in fact, fashionable—subject for clinical study. Depression, chronic anxiety, sexual impotence or deviation, hysteria, morbid fears, recurring headaches all seemed to be increasing. One particularly energetic neurologist from New York, George M. Beard, had just coined the term neurasthenia to describe a condition growing out of environmental stress and manifesting itself in an assortment of fears, anxieties, and psychosomatic symptoms. Beard himself believed that America was peculiarly the home of such ills because of the constant choice and uncertainty associated with the country’s relentless growth.

This interest in emotional ills is not important only in and of itself; it is also evidence of the significance of a new kind of specialist, the neurologist. Even if such big-city practitioners represented only a small minority of the total body of physicians, and even if they treated a similarly insignificant number of patients, these specialists were forerunners of a new and increasingly important style of medical practice. By 1884 urban medicine was in fact already dominated by specialists—by ophthalmologists, orthopedic surgeons, dermatologists, otologists and laryngologists, obstetricians and gynecologists, even a handful of pediatricians—as well as the neurologists. Many such practitioners were not exclusive specialists; they saw patients in general practice, but their reputations—as well as the bulk of their consulting and hospital practice—were based on their specialized competence. They were the teachers of a new generation of medical students, and it was their articles that filled the most prestigious medical journals.

Physicians still found it difficult to think of diseases as concrete, specific entities. Fevers tended to melt into each other; diphtheria and croup, even syphilis and gonorrhea, were often confused.

The availability of such new tools as the ophthalmoscope plus an avalanche of clinical information meant that no individual could hope to master the whole of clinical medicine as might have been done a half-century earlier. Coupled with the realities of an extremely competitive marketplace, this explosion of knowledge guaranteed that the movement toward specialization would be irresistible. Significantly, however, ordinary physicians remained suspicious of specialists, whom they saw as illegitimate competitors using claims to superior competence to elbow aside family physicians. In 1884 the handful of specialist societies already in existence were well aware of such hostility, and most adopted rules forbidding members from advertising their expertise in any way.

THERE WERE OTHER STRAWS in the wind indicating the directions in which medicine was to evolve. One was the gradually expanding role of the hospital. Though still limited almost entirely to larger communities, hospitals were playing an ever larger role in the provision of medical care to the urban poor. When America’s first hospital survey was undertaken in 1873, it found only 178 hospitals of all kinds (including psychiatric) in the United States; by 1889 the number had risen to over 700. Even in small towns, local boosters and energetic physicians were beginning to think of the hospital as a necessary civic amenity; by 1910 thousands of county seats and prosperous towns boasted their own thriving community hospitals.

America’s first few nursing schools provided another, if less conspicuous, straw in the wind. The movement for nurse training was only a decade old in the United States in 1884, and the supply of trained nurses and the number of training schools were pitifully small; the 1880 census located only fifteen schools, with a total of 323 students. But the first products of these schools (reinforced by a handful of English-trained administrators) were already teaching and supervising the wards in many of our leading hospitals. They were also beginning to provide a supply of nurses for private duty in the homes of the middle and upper classes.

Before the 1870s, both male and female nurses ordinarily had been trained on the job in hospitals or, even more informally, alongside physicians in the course of their private practice. But no matter how long they practiced or how skilled their ministrations, such individuals were inevitably regarded as a kind of well-trained servant. By the time of the First World War, hospitals and the nurses who staffed, taught, and trained in them had become a fundamental aspect of medical care for almost all Americans, not just for the urban poor.

Some aspects of medicine have not changed during the past century. One is a tension between the bedside and the laboratory. At least some clinicians in 1884 were already becoming alarmed at the growing influence of an impersonal medical technology. Even the thermometer could be a means of avoiding the doctor’s traditional need to master the use of his eyes, ears, and fingertips—and thus disrupt the physician’s personal relationship with the patient. Such worries have become a clichéd criticism of medicine a century later; the growth of technology seems to have created an assortment of new problems as it solved a host of old ones.

But whatever the physician’s armory of tools, drugs, and ideas, some aspects of medicine seem unlikely to change. One is death. Euthanasia was already a tangible dilemma in 1884—when the physician’s technical means for averting death were primitive. “To surrender to superior forces,” as one put it, was not the same as to hasten or induce the inevitable. “May there not come a time when it is a duty in the interests of the survivors to stop a fight which is only prolonging a useless or hopeless struggle?”

Some of our medical problems have not been solved so much as redefined; and some have changed only in detail. A century ago an essay contest was announced in Boston; its subject was the probability of a cure for cancer. No prizes were awarded.

