
How Prozac Finished Off Freud

December 2024

In the past hundred years, psychiatry has come full circle: Psychoanalysis lost; medicine won.

Not long ago, I was lecturing in my course on medical history about people who had accused themselves of smearing feces on a crucifix or committing some equally sacrilegious act. In fact, their beliefs had been delusional. They had done nothing of the kind, but one manifestation of their illness was this untrue self-reproach.

After the lecture, an older student, a woman in her thirties, came up to me and said, “I’d like to talk to you.”

We went back to my office. She said, “You know those patients you mentioned with that kind of idea? I’ve been having those same thoughts myself.” She had been depressed.

“Are you being treated by someone?” I asked.

She nodded. “I’ve been seeing a psychoanalyst.”

My heart sank. Of all the treatments available for such a complaint, she had chosen the worst.

For her, psychiatric problems meant seeking psychoanalytic help, because she thought the terms were synonymous. Psychiatry, psychoanalysis, psychology—aren’t they really all pretty much the same thing? Millions of people think so.

In fact, psychiatry is the branch of medicine that specializes in diseases of the brain and mind, excluding the obviously organic ones that neurology treats. Psychology is the science of the mind, and psychological treatment consists mainly of psychotherapy. Psychoanalysis is the particular method of psychotherapy developed by Sigmund Freud in turn-of-the-century Vienna, and today it is virtually dead within psychiatry. The average psychiatrist will almost never ask patients about their dreams. Half of all visits to psychiatrists in America now end with the prescription of a psychoactive drug, such as the antianxiety drug Xanax or the antidepressant Prozac. Indeed, the discipline has changed radically in the past thirty years. Not one of the psychiatric verities of a couple of generations back has survived intact.

Few Americans are aware of these changes, yet such is the importance of psychiatric illness that they amount to one of the most important stories of the century. Serious mental illness in American society is literally as common as gallstones. According to the National Center for Health Statistics, five in every thousand Americans will be hospitalized for psychosis every year, almost exactly the same rate as for cholelithiasis.

Among women in Western countries, major depression is by far the commonest illness. According to the new Global Burden of Disease study from the Harvard School of Public Health, five of the top ten causes of chronic disease in women are psychiatric. (In addition to depression, they are, in order of frequency, dementia, schizophrenia, manic depression—now called bipolar disorder—and obsessive-compulsive behavior.)

Men aren’t depressed as often as women, but they have the lion’s share of other psychiatric problems. They suffer two to three times as many substance-abuse disorders as do women, and they hold a kind of monopoly on antisocial personality disorder, getting into bar fights and the like.

Altogether, psychiatric problems are just about as common as the common cold, and how they have been confronted over the century is a major story—a heartbreaking one when you consider how such treatment was mishandled in the past, a heroic one when you consider how much better, and with what scientific audacity, we are dispensing it now. We have come a distance in thirty years, but in a sense we have come full circle. We have returned to the turn-of-the-century view of psychiatric diseases as medical problems requiring medical solutions. Only now, we have many solutions that work.

From its very beginnings, psychiatry has been torn between two visions of mental illness. One stresses the neurosciences, with their interest in brain chemistry, brain anatomy, and medication, seeing the origin of psychic distress in the biology of the cerebral cortex. The other looks to patients’ lives, attributing their symptoms to social problems or personal stresses to which they have adjusted imperfectly. The neuroscience version is usually called “biological” psychiatry; the social-stress version makes great virtue of the “bio-psychosocial” model of illness. Yet, even though psychiatrists may share both perspectives, when it comes to treating individual patients, the perspectives themselves really are polar opposites. Either one’s depression arises from a biologically influenced imbalance in one’s neurotransmitters, perhaps activated or intensified by stress, or it reflects some psychodynamic process in one’s unconscious mind. It is thus of great importance which vision dominates psychiatry at any given moment.

The broad and thorough failure of early brain science opened the door for theories of the pathological influence of early-childhood events.

This bifurcation of vision was present at the very beginning of the discipline’s history, but at the start the biological version had the upper hand. In 1758, the Englishman William Battie worked out a complicated explanation of mental illness that put, at its root, muscular “spasms” that led to “laxity” of the blood vessels of the brain, which in turn caused “obstruction” of those vessels, an ensuing “compression” of the nerves, and delusive sensations. In America, the great physician Benjamin Rush wrote in a study of psychiatry published in 1812 that “the cause of madness is seated primarily in the blood vessels of the brain, and it depends upon the same kind of morbid and irregular actions that constitute other arterial diseases.”

Throughout the nineteenth century, such scientific psychiatry was dominated by German heavyweights—the founders of scientific medicine in general—who tried to explain psychiatric symptoms as a result of lesions in the brain. They did not get very far because they were searching with the wrong tools. The microscopes available to them, for example, revealed almost no evidence of pathology in the autopsied brains of mentally ill patients whose lesions we can detect today with such high-tech imaging devices as CT and MR scanners. They also did not yet understand brain chemistry—that is, receptors and neurotransmitters. As a result, they came up with little linking mental symptoms to underlying brain disease. Moreover, despite a battery of treatments that ranged from hydrotherapy to dietetic therapy, their efforts to effect real cures were almost invariably disappointing. Their more intractable cases went to asylums and stayed there.

This broad and thorough failure opened the door for theories of emotional causes of mental illness, and so psychoanalysis, which stressed the pathological influence of early-childhood events, began its ascendancy. Of course, the titan of the discipline was Sigmund Freud. His theory held that repressed childhood sexual memories and fantasies caused neurosis when reactivated in adult life. Such neurosis could be cured by an elaborate technique emphasizing dream analysis, free association, and the working through of a “transference neurosis,” in which the analyst represents one of the patient’s parents as a love object, the patient then living out and working through attitudes from childhood.

Many histories of psychiatry describe psychoanalysis as the end of the story, the goal toward which all previous events marched. For a brief time in the middle of this century, middle-class society was enraptured by the notion that psychological problems arose from unconscious conflicts over long-past events. Psychoanalysis had a powerful appeal for both doctors and patients. For patients, the cooperative enterprise of soul-searching, doctor and patient working together, created the impression of being cared for emotionally; the treatment offered a situation in which patients basked in an aura of concern while seeking powerful emotional and poetic truths.

For doctors, the advantage was even clearer: Freud’s psychoanalysis provided a way out of the asylum. The practice of depth psychology, founded on Freud’s views, permitted psychiatrists for the first time to establish themselves as an office-based specialty and to wrest psychotherapy from the neurologists, who had pioneered private-practice psychotherapy from their own offices.

In September 1909, Freud came to the United States to lecture at Clark University. The tour launched the psychoanalytic movement in America. By 1935, Fortune magazine could soberly explain that “the suppression of the sexual instinct in childhood pushed certain experiences and desires deep into the unconscious, where they reappear in the adult as neuroses.” For the first time in history, the depressed businessman or the anxious housewife might seek out the services of a psychiatrist, and if the psychiatrist lived in New York, Boston, or Washington, chances were that he would offer psychoanalysis.

But what ultimately converted a fashionable therapeutic boomlet into a mass ideology shaping almost every aspect of American thought and culture was the Holocaust. In the 1930s, fascism drove many analysts who were Jewish from Central Europe to the United States, where they lent the fledgling American movement the glamour and heft of the wide world. More and more, young Yankee psychiatrists turned away from the German scientific school and toward depth therapy.


Soon psychiatrists were winning a monopoly over the young therapy. In the public mind psychotherapy and psychoanalysis became virtually synonymous. The American Psychoanalytic Association insisted that only psychiatrists, all of whom are M.D.'s, could be trained as analysts. By the early 1960s, the triumph of analysis seemed complete.

When the psychiatrist Herbert Meltzer did his post-M.D. training (his residency) in the mid-1960s, the Massachusetts Mental Health Center was at the epicenter of American psychiatry. “It was dominated by psychoanalysts,” he recalls, “who were totally committed to psychodynamic psychiatry. Even their approach to schizophrenia was psychoanalytic.”

Around that time Donald Klein, later a prominent biological psychiatrist, was working at New York’s Hillside Hospital, whose two hundred beds all were reserved for psychoanalytic therapy. Already dubious about analysis, Klein was the only doctor in the hospital allowed to write prescriptions for medication. “They’d call me and say, ‘Look, I want Mrs. Jones on Thorazine [an antipsychotic drug], two hundred milligrams,’ and I’d say, ‘Why?’ and they’d say, ‘Well, she’s schizophrenic,’ and I’d say, ‘Yeah, but she’s been here how long, ten months, and she’s been schizophrenic all along. Why do you want to give it now?’

“They’d say, ‘Well, she hasn’t responded to the therapy and we think that it’s probably a good idea.’ When you’d talk to the patient about it, the patient would say, ‘It’s okay by me, but it means the doctor has given up on me.’ When you’d talk to the ward staff about it, they’d say, ‘She kicked a maid last week, and we’re not going to put up with that.’”

Hospitalization was not only confused in its purpose; it was also exceptional. The world of psychoanalysis turned on the axis of Main Street practitioners in private offices treating middle-class patients for unhappiness. Wealthier patients might see the inside of a place like Hillside Hospital, where they would receive talking therapy. Those with grave problems were forgotten in red-brick state mental hospitals, beyond the interest of psychoanalytic psychiatry.

Then, in the 1960s and 1970s, the psychiatric world began to abandon psychoanalysis and revive the nineteenth-century tradition of psychiatry as neuroscience. Crucial events leading to this neuroscientific revival had begun much earlier, with systematic experimentation to study the chemistry of the brain. Brain chemistry means neurotransmitters, the chemicals that carry nerve impulses from one neuron, or nerve cell, to another, traveling via the synapse, the gap between the neurons. Although such research went back to turn-of-the-century English physiologists, it was only in the early 1920s that Otto Loewi, a professor of pharmacology at the University of Graz, isolated the first neurotransmitter, and not until 1926 that he was able to say that a chemical called acetylcholine mediated the transmission of nerve impulses from one nerve to the next.

The discovery of acetylcholine did not remain abstract knowledge, but had rapid therapeutic consequences; in the 1930s psychiatrists out of pure desperation began administering doses of it to their patients in hopes of relieving schizophrenia—even though they had no notion of how its mechanism worked. The results were indifferent, but many doctors remained optimistic, and in 1952, their confidence was justified with the discovery of a powerful new antipsychotic drug called chlorpromazine, an antihistamine not related to acetylcholine. That same year a group of Paris physicians, including the anesthetist Henri Laborit and the psychiatrists Jean Delay and Pierre Deniker, learned that agitated psychotic patients, locked in the grip of their voices and delusions, became vastly better once they began taking the new drug. It had been developed by a French drug house and was marketed in the United States as Thorazine once the American pharmaceutical firm Smith Kline & French (as it then was) had bought the rights.

The impact of chlorpromazine on psychotic patients was sensational. At the time, Donald Klein was in charge of a ward for World War I veterans at a Public Health Service hospital in Lexington, Kentucky, where, he says, “these people had been psychotic for thirty years. They were out of it completely.” After he began giving chlorpromazine, he recalls, “a guy who hadn’t said anything for thirty years comes over to me after a few weeks and said, ‘Doc, when am I getting out of here?’ It was Rip Van Winkle. He had remembered nothing. The last thing he remembered was, in 1916, going over the trenches. That was an honest-to-God miracle.”

The miracle drug had major disadvantages. It tended to produce involuntary movements, in their chronic form called tardive dyskinesia, which many patients found embarrassing, often causing them to stop taking the drug after they were discharged from the institution; nobody knew how it worked (on the other hand, nobody knew how aspirin worked either); and there were reservations about treating what were seen as the symptoms of mental distress rather than its causes. Yet chlorpromazine’s obvious effectiveness gave it the same persuasive punch as penicillin.

By 1956, two years after the American launch of chlorpromazine, Life was headlining NEW AVENUES INTO SICK MINDS. The story compared noise levels in a disturbed ward before and after the introduction of chlorpromazine; before the drug, the ward had been 220 times as loud. With chlorpromazine began the takeoff of psychopharmacology, the branch of biological psychiatry that deals with medication. Chlorpromazine’s success initiated the search for other kinds of psychiatric drugs and for insight into the mechanisms of their action in the brain.

Thorazine’s obvious effectiveness gave it the same persuasive punch as penicillin, and initiated the search for other kinds of psychiatric drugs.

Chlorpromazine did not represent the first use of drugs in psychiatry. The French novelist Marcel Proust, for example, was dependent upon barbiturates, the heavy-duty sedatives that preceded Valium. Virginia Woolf often took Veronal, a brand-name barbiturate. In asylums powerful laxatives like castor oil had once been commonly given for schizophrenia. But, with chlorpromazine, one had a drug that made a difference.

Psychoanalysts were, in the main, hostile to the discovery. They struggled to find psychoanalytic explanations for chlorpromazine’s success (“protects patients from overwhelming sexual and aggressive impulses,” went one), and the principal American psychiatric journals, dominated by analysts, refused to accept articles on psychopharmacology.

This was in the mid-1950s. At the time, there was a second highly successful treatment for mental illness that later would be buried under adverse publicity. Even today many readers will react negatively when they hear about electroshock, or electroconvulsive therapy (ECT). First developed in 1938, ECT had by the mid-1950s been refined to the point at which it was accepted as a normal therapy in the treatment of depression and psychosis. Why shocking the brain to the extent of eliciting convulsions makes psychotic patients better remains unclear. Yet empirically the therapy has proved itself. ECT is, in fact, at present the single most effective treatment for major depression, particularly in cases where drug therapy has failed or is inadvisable. It has prevented many severely depressed people from committing suicide in the depths of their despair.

However, in the 1960s, a generation in revolt against technological solutions to problems they thought should be handled by kindness and community action decided that ECT brutalized and dehumanized patients. Ken Kesey, a creative-writing student who had been an orderly at a psychiatric hospital in California, produced in 1962 the acetylene flame with which the sixties Zeitgeist burned away ECT: the novel One Flew Over the Cuckoo’s Nest. Kesey had acquired the idea that ECT “fried” the brain. When, in 1975, Milos Forman made the book into a movie, it became the biggest hit in United Artists’ history up to that time and helped persuade an entire generation that ECT belonged in the chamber of psychiatric horrors, alongside lobotomy. For the next decade or so, ECT virtually vanished from the armamentarium of useful psychiatric therapies.

Thanks to the work of a small group led by Max Fink, of the State University of New York at Stony Brook, electroshock has recently returned to favor. Its rehabilitation is testimony to the power of science in medicine: The ECT advocates were able to show that it really did bring people back from depression, and rapidly too, without the three-week wait often required for antidepressants to kick in and with very few side effects, the much-trumpeted risk of permanent memory loss having been greatly exaggerated.

Despite the loss of ECT in the 1960s, the revolution in psychiatry continued, almost entirely drug-oriented. Chlorpromazine opened the way for a cornucopia of medications that could help major psychiatric illnesses. Of course, not all that spilled from the cornucopia was bounty. Some were me-too drugs, thrown onto the market only for competitive reasons; others were identified as toxic and soon withdrawn; still others lapped from psychiatry into the streets as drugs of abuse. Yet on the whole these medications greatly ameliorated the lot of people with mental illness.

But what of those millions and millions who suffered lesser but still painful woes: depression, anxiety, agoraphobia? The story of what one of its most effective proselytizers—Peter Kramer of Brown University, who championed the drug Prozac—called “cosmetic psychopharmacology” began with a Czech-born Jewish refugee from Hitler named Frank Berger. Although trained as a bacteriologist, Berger wrote, “I was interested in the neuropharmacological basis of mental disturbances. Most people get nervous and irritable for no good reason. They flare up, do not differentiate between serious problems and inconsequential ones, and somehow manage to get excited needlessly. These people are not insane; they are simply overexcitable and irritable, and create crisis situations over things that are unimportant.”

Consulting for a drug house named Carter Products, whose chief previous claim to fame had been Carter’s Little Liver Pills, Berger, a gifted scientist and medical thinker, produced a drug, later given the generic name meprobamate, that reduced quotidian anxiety. Carter lost interest in the product after asking a sample of physicians if they wanted such a thing.


Eventually, it made its way to the market under the name Miltown and generated a greater demand than any drug previously marketed in the United States. In 1960, Hoffmann-La Roche, a Swiss pharmaceutical firm with a large American arm, brought out Librium (chlordiazepoxide), the first of the antianxiety drugs called benzodiazepines. Valium (diazepam) followed three years later, in 1963.

These drugs began to change the pattern of psychiatry. For the first time, psychiatrists had something to offer their patients other than hour after hour of talk therapy, something that would make an immediate difference in how they felt without sedating them the way barbiturates had done. By 1975, a quarter of all office visits to a psychiatrist were ending with a prescription. There was, however, one problem: The benzodiazepines turned out to be addictive. In 1975, the Food and Drug Administration put Valium and the other “benzos” on its list of controlled substances.

As antianxiety drugs appeared, so did ones that fought depression. In the late 1950s, the Swiss firm Geigy (since folded into Novartis) launched Tofranil (imipramine), the first really effective medication for depression. Scads of others followed, a so-called first wave of antidepressants. The second wave came in the early 1980s. In consultation with the Swedish neuroscientist Arvid Carlsson, the Stockholm pharmaceutical company Astra came out with a new drug that affected the chemical neurotransmitter serotonin. After a neurotransmitter has carried a nerve impulse across the synapse from one nerve cell to another, the neurotransmitter is taken back into the nerve cell that discharged it. This is called reuptake. The new drug helped serotonin remain longer in the synapse by selectively inhibiting its reuptake. By some mechanism not fully understood, this tends to reduce depression. Drugs using the process soon became identified by the acronym SSRI: selective serotonin reuptake inhibitor.

Astra’s new drug, Zelmid (zimelidine), was highly effective, but it turned out to cause unacceptable side effects and was withdrawn from the market before it ever reached the United States. Nonetheless, Carlsson and Astra’s work was quickly built upon. In Indianapolis, Indiana, the Eli Lilly Company had been working independently on its own SSRI and came up with a compound that, although no more effective than the first-generation antidepressants, acted more quickly, had fewer side effects, and even produced weight loss. Lilly introduced the new drug in December 1987 as Prozac (fluoxetine). It went on to become the most successful psychiatric drug in history.

With the success of the new SSRIs in treating depression, depression itself began to change. Once it had been an unusual condition that often led to hospitalization or even suicide; by the early 1990s, the definition of depression had been greatly expanded to include a range of disorders that responded to new drugs like Prozac, and it was transformed into a common malady for which help was readily available. Today, almost half of all visits to American psychiatrists are for mood disorders, depression chief among them. Moreover, the SSRIs are so straightforward to administer that many physicians other than psychiatrists prescribe them; one-third of antidepressants, for instance, are prescribed by family doctors.

The discovery of the antidepressants, the antipsychotics, and the other highly effective psychiatric drugs represents an epochal scientific accomplishment. For example, lithium, for mania and chronic depression, is one of the most effective medications in psychiatry. But if, as the British psychiatrist David Healy puts it, “drugs created biological psychiatry,” then the research those drugs fostered has now come full circle, with the news that psychotherapy too can affect brain chemistry. In a series of studies in the early 1990s, a team at the UCLA School of Medicine demonstrated that if you treat obsessive-compulsive disorder with psychotherapy, the elements of brain chemistry suggestive of psychiatric illness may disappear. So psychotherapy can be effective. Still, psychoanalysis as a form of psychotherapy is no more effective than any of 130 other kinds of psychotherapy, and it’s a great deal slower and more expensive.

Psychotherapy has always existed in medicine, the doctor acting upon the patient’s mind in the context of a one-on-one relationship. Few examples of its efficacy are more dramatic (or extreme) than one related by the English psychiatrist Sir James Crichton-Browne, about R.W., a patient of his who had been admitted following a head wound suffered in the First World War. R.W. had “never quite recovered from the accident, was afterwards dismissed from the Service, and ultimately found his way into the asylum in a state of maniacal excitement.” He was “restless and sleepless and miserable, and became greatly emaciated and haggard.”

The real world of psychiatric illness is just as tricky and just as filled with ultimate tragedies as is the real world of cardiology.

When another of Crichton-Browne’s patients in the asylum died, R.W., who saw the body being removed, glanced a little later in the mirror and exclaimed in horror, “Good heavens! I’ve got his head on!” He told Crichton-Browne, “This is not my head, Doctor. Just look at it! This is not my nose, it is [the other patient’s]!” He began “bewailing it daily and even threatening suicide to end his trouble.”

After three weeks of this, Crichton-Browne decided to try an experiment. Passing R.W. in the hall one day, he looked surprised and asked, “When did the change take place? When did you get it back again?”

R.W. quickly explored his face with his fingers. “You don’t mean to say that—but yes, it is. Oh, Doctor, I’m so rejoiced.” He looked at himself in the mirror. “I wasn’t the least aware of it till you spoke.”

Concluded Crichton-Browne: “There is such a thing as psychotherapy.”

Indeed there is, though it almost never works that abruptly. The power of suggestion can be very potent, the more so when the suggestion comes from a figure of authority like a physician. Medical students learn that they must not frown and grunt suspiciously when listening to a patient’s heart; doing so can, by sheer suggestion, induce chest pain in the patient.

The therapeutic power inherent in the doctor-patient relationship represents in itself a form of psychotherapy, a form no less effective than any of the psychotherapeutic systems in circulation, such as Jungian therapy, family therapy, and so on. Psychiatrists are singularly well placed to administer this kind of psychotherapy in addition to medication, for according to the National Ambulatory Medical Care Survey, the average psychiatric consultation lasts more than forty minutes, in contrast with the average consultation in internal medicine or obstetrics, which lasts only around ten minutes. In those forty minutes the psychiatrist has ample opportunity to work on the patient’s mind, listening and counseling. There is a synergy between psychopharmacology and psychotherapy; each makes the other more effective.

Today, only 2 percent of psychiatric patients receive psychoanalytic therapy. Although many psychiatrists retain Freudian couches in their offices, they increasingly use them to pile their offprints on. Of 163 residency training programs for psychiatrists in the United States, more than 100 have abandoned instruction in intensive psychotherapy. The hallowed psychoanalytic concepts, such as inferiority complex and anal retentiveness—once so familiar in cocktail-party chitchat—have ceased to be important in the understanding of psychiatric illness. The critic Frederick Crews of the University of California at Berkeley says in his book The Memory Wars: Freud’s Legacy in Dispute, “Entities like the psychic troika of id, ego, and superego deserve to be regarded not as discoveries like radium or DNA, nor even as mistakes like ether or animal magnetism, but as pure inventions like Esperanto, Dungeons & Dragons, or closer yet, Rube Goldberg algorithms for making something happen with maximum complication.”

But we aren’t just talking about psychopharm beating psychochat. Rather, it’s a growing confidence within the profession that psychiatric illness can be handled much the way heart disease or kidney disease is, that psychiatry is a specialty of medicine, and that the function of the psychiatrist is to treat illness, not to act as a glorified social worker or literary pundit. Today more than a quarter of all psychiatric training programs offer a combined residency in internal medicine or family medicine. The psychiatry of the year 2000 is linked firmly to the rest of medicine.

What does the future hold in store? The new psychiatry offers the same therapeutic promise as did internal medicine in the early 1950s, in those days eager to attack illnesses with brand-new antibiotics such as penicillin and anti-inflammatories such as cortisone. The promise inherent in the “wonder drugs” of that era beckons again with the new “atypical” antipsychotics, such as Lilly’s Zyprexa (olanzapine)—atypical because they cause fewer of the involuntary movements that bedeviled patients with the earlier antipsychotics. The promise beckons, too, with the new SSRIs, which turn out to be effective for many psychiatric illnesses, such as panic disorder, bulimia, and obsessive-compulsive behavior, in addition to depression.

Yet there are problems. Although the new psychopharmacology sounds great in the drug ads directed at doctors, many people with schizophrenic relatives will tell you what a martyr’s path they had to tread until the right antipsychotic was found. Despite the SSRIs, bleakness still visits the lives of many affected by depression. The real world of psychiatric illness is just as tricky and just as filled with ultimate tragedies as is the real world of cardiology.

In addition, the new biological psychiatry still has the capacity to shoot itself in the foot. One tiger pit into which it can fall is the proliferation of new diseases. Whenever you hear about a new disease, a warning light should go on in your mind, for there really is very little new under the sun. Yet in child psychiatry, for instance, so-called attention deficit disorders have become all but ubiquitous. The classrooms in well-to-do neighborhoods are now rife with children who have received diagnoses unknown thirty years ago, such as attention deficit hyperactivity disorder (ADHD), children who every morning must take their medication, usually a Novartis product called Ritalin (methylphenidate). Generally the problem falls in the Tom Sawyer band on the spectrum of standard child behavior, affecting normal active kids (usually boys) whose teachers are tired of their antics and whose parents have accepted a diagnosis urged on them by the school social worker. A psychiatrist or pediatrician often confirms the diagnosis because it’s the simplest way of dealing with the parents’ anxiety.


Is there really such a disease as ADHD? Yes, in that a small number of these children do have a genetically based problem with hyperactivity. No, in the sense that the so-called disease is virtually unknown elsewhere. (Try asking a British psychiatrist what he or she thinks about ADHD.) Evidence of the culture-specific nature of the epidemic: The United States consumes 90 percent of the world’s Ritalin.

Trauma has been another trap for the new psychiatry. In the 1970s, post-traumatic stress disorder (PTSD) was first articulated by a group of Vietnam veterans, and it rapidly disseminated from the narrow world of unhappy vets to the wide world in which stress and unhappiness are as common as grass. Suddenly, the normal frictions and disappointments of life became a psychiatric illness, and PTSD counseling a growth industry. Yet even an event as stressful as the Holocaust produced no particular pattern of psychiatric illness in its victims; I follow on this point the wisdom of Herman van Praag, former chair of psychiatry at the Albert Einstein College of Medicine and himself a Holocaust survivor. It is unrealistic to claim that watching scary movies on television can produce PTSD in children.

American society seems to crave psychiatric diagnoses. The positive side of this is that psychiatric illness is being destigmatized; the bad news is that the normal heartburn of living is being medicalized. When the burden of self-consciousness is pathologized, psychiatrists are called on to do things for which they are not actually trained. In psychiatric training you aren’t taught to help people deal with unhappiness, which tends to flow from the structure of life; you’re taught to look for the side effects of medication and to prevent the patient’s lithium from interacting with his cardiology drugs. You’re trained to deal with such psychiatric symptoms as depression, anxiety, obsessive-compulsive behavior, and, in the overmedicated elderly, delirium. And with the new biological psychiatry, you learn to perform these tasks very well.

But American society will be disappointed if it calls upon psychiatry to deal with children who are out of control, adults who cannot keep relationships together, and drivers who experience “road rage” when somebody tailgates them. It is psychiatry’s alliance with the neurosciences, a triumphant revival, that has given it the ability to deal effectively with real psychiatric illness. It is owing to the long hegemony of psychoanalysis that we somehow expect from psychiatry much more.
