
The Rise And Decline of the Teenager

December 2024

The word emerged during the Depression to define a new kind of American adolescence, one that prevailed for half a century and may now be ending.

When the anthropologist Margaret Mead journeyed to the South Pacific in 1925, she was looking for something that experts of the time thought didn’t exist: untroubled adolescence.


Adolescence, psychologists and educators believed, was inevitably a period of storm and stress. It debilitated young men and women. It made their actions unpredictable, their characters flighty and undependable. And if people who had lived through their teens didn’t remember being that unhappy, some said, it was because it had been so traumatic that their conscious minds had suppressed what really happened.

At the age of twenty-three, Mead, who wasn’t all that far beyond adolescence herself, simply couldn’t believe that this picture of life’s second decade expressed a necessary or universal truth. If she could find a place where social and sexual maturity could be attained without a struggle, where adolescence was so peaceful it scarcely seemed to exist, her point would be made. So she went to Samoa.

There are few places left on Earth remote enough to give a contemporary observer real perspective on how Americans think about their young people. The teenager, with all the ideas about adolescence that the word encodes, is one of our most potent cultural exports. All around the world, satellites beam down MTV with its messages of consumption, self-indulgence, alienation, angst, and hedonism. The American invention of youth culture has become thoroughly international; it causes consternation and sells products everywhere.

Still, although it is extremely difficult to travel far enough across the earth to escape our culture’s ideas about teenagers, one can travel in time. Youth has a history, and since the European colonization of North America, the second decade of life has offered a tremendous diversity of expectations and experiences. They haven’t all been good experiences; most were backbreaking, some horrifying. One needn’t be nostalgic for those lost forms of youth in order to learn from them. Nobody wants to send young people off to the coal mines, as was done a century ago, or rent them out to neighboring households as servants, as seventeenth-century New Englanders did. Nevertheless, history can be our Samoa, a window into very different ways of thinking and behaving that can throw our own attitudes into sharp relief and highlight assumptions that we don’t even know we’re making.

Like Mead, who freely admitted that her research in Samoa was shaped by what she viewed as a problem in the American culture of her own time, I have set out on historical explorations spurred by a suspicion that something is deeply wrong with the way we think about youth. Many members of my generation, the baby boomers, have moved seamlessly from blaming our parents for the ills of society to blaming our children. Teenage villains, unwed mothers, new smokers, reckless drivers, and criminal predators are familiar figures in the media, even when the problems they represent are more common among other age groups. Cities and suburbs enact curfews and other laws that only young people need obey, while Congress and state legislatures find new ways to punish young offenders as adults.

The way we think about teenagers is deeply contradictory. We assume that they should be somehow protected from the world of work, yet many high school students work as much as twenty hours a week. Teenagers form the core of our low-wage retail and restaurant work force, the local equivalent of the even lower-wage overseas manufacturing work force that makes the footwear and other items teens covet.

After 1933, virtually all young people were thrown out of work, and for the first time a majority of high-school-age Americans were actually enrolled.

Yet even as our economy depends on the young, we tend to view teenagers as less than trustworthy. This is a hangover from the attitudes Mead was trying to fight, though nowadays we’re likely to ascribe young people’s perceived quirks to “raging hormones.” Most adults seem to view this conflicted, contradictory figure of the teenager as inevitable, part of the growth of a human being. Yet many people now living came of age before there was anything called a teenager. This creature is a mid-twentieth-century phenomenon. And almost everything has changed since the early 1940s, when it emerged. Are teenagers still necessary?

The word "teenager" initially saw print in 1941. It isn’t known who thought up the word; its appearance, in an article in Popular Science, was not likely its first use. People had been speaking of someone in his or her teens for centuries, but that was a description of an individual. To speak of someone as a teenager is to make that person a member of a very large group, one defined only by age but presumed to have a lot in common. The word arose when it did because it described something new.

The teenager was a product of the Great Depression. Like other massive projects of the New Deal—the Hoover Dam, the TVA—it represented an immense channeling and redirection of energy. Unlike such public works, however, it was a more or less inadvertent invention. It happened in several steps.

First came the country’s general economic collapse and a dramatic disappearance of jobs. As in previous panics and depressions, young people were among those thrown out of work. What was different was that, after 1933, when Franklin D. Roosevelt took office, virtually all young people were thrown out of work, as part of a public policy to reserve jobs for men trying to support families. Businesses could actually be fined if they kept childless young people on their payrolls. And for its first two years, the Roosevelt administration essentially ignored the needs of the youths it had turned out of work, except through the Civilian Conservation Corps (CCC), which was aimed at men in their late teens and early twenties.

There was, however, one very old and established institution available to young people who wanted to do something with their time and energy: high school. The first public high school had opened in Boston in 1821, but secondary education was very slow to win acceptance among working-class families that counted on their children’s incomes for survival. Not until 112 years after that first school opened were a majority of high-school-age Americans actually enrolled.

The Depression was the worst possible time for high school to catch on. The American public education system was, then as now, supported primarily by local real estate taxes; these had plummeted along with real estate values. Schools were laying off teachers even as they enrolled unprecedented numbers of students. They were ill equipped to deal with their new, diverse clientele.

For many of these new students, high school was a stopgap, something one did to weather a bad time. But by 1940 an overwhelming majority of young people were enrolled, and perhaps more important, there was a new expectation that nearly everyone would go, and even graduate.

This change in standards was a radical departure in the way society imagined itself. Before the Depression, finishing high school was a clear mark that a youth, particularly a male, belonged to the middle class or above. Dropping out in the first or second year indicated membership in the working class. Once a large majority started going to high school, all of them, regardless of their economic or social status, began to be seen as members of a single group. The word “teenager” appeared precisely at the moment that it seemed to be needed.


Not long before, many young people in their mid-teens had been considered virtually grown up. Now that they were students rather than workers, they came to seem younger than before. During the 1920s “youth” in the movies had meant sexually mature figures, such as Joan Crawford, whom F. Scott Fitzgerald himself called the definitive flapper. Late in the 1930s a new kind of youth emerged in the movies, personified above all by the bizarre boy-man Mickey Rooney and the Andy Hardy movies he began to make in 1937. His frequent co-star Judy Garland was part of the phenomenon too. As Dorothy, in The Wizard of Oz, Garland was clearly a woman, not the girl everyone pretended she was. The tension between the maturity she feels and the childishness others see in her helps make the film more than a children’s fantasy. It is an early, piquant expression of the predicament of the teenager.

During the 1920s, “youth” meant sexually mature figures like Joan Crawford. By the late 1930s, it meant Mickey Rooney and Judy Garland.

Another less profound but amazingly enduring model for the emerging idea of the teenager was that perennial high schooler Archie, who first appeared in a comic book in 1941. He was drawn by Bob Montana, a teenager himself, who was working for a living as a staff artist at a comic book company. For the last half-century Archie, Jughead, Betty, Veronica, and their circle have appealed more to youngsters aspiring to become teenagers than to teenagers themselves.

Nevertheless, the early popularity of characters like Andy Hardy and Archie indicated that the view of high school students as essentially juvenile was catching on. A far stronger signal came when the draft was revived, shortly before the United States entered World War II. Although married men with families were eligible for induction, in many cases up to the age of forty, high school students were automatically deferred. Young men of seventeen, sixteen, and younger had been soldiers in all of America’s previous wars and, more than likely, in every war that had ever been fought. By 1941, they had come to seem too young.

Having identified the teenager as a Frankenstein monster formed in the thirties by high school, Mickey Rooney movies, child psychology, mass manufacturing, and the New Deal, I might well have traced the story through bobbysoxers, drive-in movies, Holden Caulfield, Elvis, the civil rights martyr Emmett Till, Top-40 radio, Gidget, the Mustang, heavy metal, Nirvana. Instead I found myself drawn farther into the past. While the teenager was a new thing in 1940, it nevertheless was an idea with deep roots in our culture.

At the very dawn of English settlement in North America, Puritan elders were declaring that they had come to this savage continent for the sake of their children, who did not seem sufficiently grateful. (Like latter-day suburbanites, they had made the move for the sake of the kids.) They were also shocked by the sheer size of their children. Better nutrition caused Americans of European background to reach physical and sexual maturity sooner, and to grow larger, than their parents had. No wonder some early settlers fretted that their children were different from them and at risk of going native.

By the middle of the eighteenth century, there was a whole literature of complaint against both apprentices who affected expensive and exotic costumes and licentious young people given to nighttime “frolicks.” Jonathan Edwards gave one of the most vivid descriptions of moral decline and then proceeded to deal with it by mobilizing youthful enthusiasm within the church. By the time of the American Revolution, half the population was under sixteen. Young women over eighteen were hard to marry off, as one upper-class observer noted, because their teeth were starting to rot. (Seemingly unrelated issues like dental hygiene have always played an unsung role in the way we define the ages of man and woman.)

Yet as youthful as the American population was, young people stood in the mainstream of social and economic life. They were not the discrete group that today’s teenagers are. “In America,” wrote Alexis de Tocqueville in 1835, “there is in truth no adolescence. At the close of boyhood the man appears and begins to trace out his own path.”

Things were beginning to change, however. High school, the institution that would eventually define the teenager, had already been invented. By the second quarter of the nineteenth century, it was becoming clear that rapid changes in manufacturing, transport, and marketing meant that the children of merchants, skilled artisans, and professionals would live in a very different world from that of their parents. Adults could no longer rely on passing on their businesses or imparting their skills to their children, who would probably need formal schooling. Increasingly, prosperous Americans were having fewer children and investing more in their education.

At the time, most secondary schooling took place in privately operated academies. These varied widely in nature and quality, and for the most part students went to them only when they had both a need and the time. These schools didn’t have fixed curricula, and students and teachers were constantly coming and going, since being a student was not yet a primary job. Students most often stayed at boardinghouses near the academies; they rarely lived at home.

The tax-supported high school, which, by the 1860s, had displaced the private academy, was based on a different set of assumptions. Attendance was a full-time activity, in which the student adjusted to the school’s schedule, not vice versa. Whereas academies had been the product of a society in which most economic activity happened in the home, high school evolved in tandem with the ideal of the bourgeois home, protected from the world of work and presided over by a mother who was also the primary moral teacher. High school students, by definition, led privileged, sheltered lives.

Most academies had enrolled only males, but nearly all high schools were from the outset coeducational. There was some public consternation over mixing the sexes at so volatile an age, but most cities decided that providing separate schools was too costly. High schools were acceptable places to send one’s daughter because they were close to home. Moreover, their graduates were qualified to teach elementary school, a major employment opportunity for young women. The result was that females constituted a majority of the high school population. At the same time, male graduates were likely to be upper class, since they included only those who didn’t have to drop out to work, while female graduates represented a wider social range.

Some of the early high schools were conceived as more practical and accessible alternatives to college. In a relatively short time, however, high school curricula became dominated by Latin and algebra, the courses required by the most selective colleges. Parents looked to win advantage for their children, so a “good” high school became one whose students went on to top colleges.

The earliest high schools treated their students almost as adults and allowed them to make decisions about their social lives. Students organized their own extracurricular activities and played on athletic teams with older men and workers. Toward the end of the nineteenth century, however, high schools increasingly sought to protect their charges from the dangers of the larger world. They organized dances so that their students wouldn’t go to dance halls. They organized sports so that students would compete with others their own age. They created cheerleading squads, in the hope that the presence of females would make boys play less violently. They discovered and promoted that ineffable quality “school spirit,” which was supposed to promote loyalty, patriotism, and social control. By the turn of the twentieth century, the football captain could escort the chief cheerleader to the senior prom.

This all sounds familiar, but this high school crowd still accounted for less than 10 percent of the secondary-school-age population. Nearly all the rest were working, most of them with their families on farms, but also in factories, mines, and department stores, in the “street trades” (as newspaper hawkers or delivery boys), in the home doing piecework, or even as prostitutes. If early high school students are obvious predecessors to today’s teenagers, their working contemporaries also helped create the youth culture.

One thing the working-class young shared with high school students and with today’s teenagers is that they were emissaries of the new. Parents wanted their children to be prepared for the future. Among the working class, a substantially immigrant population, newness was America itself. Throughout the nineteenth century settlement workers and journalists repeatedly observed the way immigrant parents depended on their children to teach them the way things worked in their new country. They also noted a generation gap, as parents tried to cling to traditions and values from the old country while their children learned and invented other ways to live. Parents both applauded and deplored their children’s participation in a new world. Youth became, in itself, a source of authority. When contemporary parents look to their children to fix the computer, program the VCR, or tell them what’s new in the culture, they continue a long American tradition.

As far as work was concerned, one ceased to be a child no later than the age of ten. In many states, schooling was required until twelve or thirteen, but compulsory attendance laws were rarely strictly enforced. In Philadelphia in the 1880s, the standard bribe to free one’s child from schooling was twenty-five cents. This was an excellent investment, considering how dependent many families were on their children. In Fall River, Massachusetts, some mill owners hired only men who had able-bodied sons who could also work. In Scranton, Pennsylvania, children’s incomes usually added up to more than their fathers’.

The working teenager is, of course, hardly extinct. American high school students are far more likely to have part-time jobs than are their counterparts in other developed countries, and their work hours are on average substantially longer. The difference is that families don’t often depend on their wages for their livelihood. Teenagers today spend most of what they earn on their own cars, clothing, and amusement. Indeed, they largely carry such industries as music, film, and footwear, in which the United States is a world leader. Their economic might sustains the powerful youth culture that so many find threatening, violent, and crude.

We can see the origins of this youth culture and of its ability to horrify in the young urban workers of the late nineteenth century. Young people, especially the rootless entrepreneurs of the street trades, were among the chief patrons of cheap theaters featuring music and melodrama that sprang up by the hundreds in the largest cities. (In Horatio Alger’s hugely popular novels, the first stage of the hero’s reform is often the decision to stay away from the theater and use the admission price to open a savings account.) They also helped support public dance halls, which promoted wild new forms of dancing and, many thought, easy virtue.

Adults are perennially shocked by the sexuality and the physical vitality of the young. There is nevertheless a real difference between the surprise and fear parents feel when they see their babies grow strong and independent and the mistrust of young people as a class. One is timeless. The other dates from 1904 and the publication of G. Stanley Hall’s fourteen-hundred-page Adolescence: Its Psychology and Its Relations to Physiology, Anthropology, Sociology, Sex, Crime, Religion and Education.

The 25-year period following World War II was the classic era of the teenager. At the same time, though, teenagers were provoking a lot of anxiety.

With this book, Hall, a psychologist and the president of Clark University, invented the field of adolescent psychology. He defined adolescence as a universal, unavoidable, and extremely precarious stage of human development. He asserted that behavior that would indicate insanity in an adult should be considered normal in an adolescent. (This has long since been proved untrue, but it is still widely believed.) He provided a basis for dealing with adolescents as neither children nor adults but as distinctive, beautiful, dangerous creatures. That people in their teens should be considered separately from others, which seems obvious to us today, was Hall’s boldest, most original, and most influential idea.

The physical and sexual development of young people was not, he argued, evidence of maturity. Their body changes were merely armaments in a struggle to achieve a higher state of being. “Youth awakes to a new world,” he wrote, “and understands neither it nor himself.” People in their teens were, he thought, recapitulating the stage of human evolution in which people ceased to be savages and became civilized. He worried that young people were growing up too quickly, and he blamed it on “our urbanized hothouse life that tends to ripen everything before its time.” He believed it was necessary to fight this growing precocity by giving young people the time, space, and guidance to help them weather the tumult and pain of adolescence.


It is hard to believe that a book so unreadable could be so influential, but the size and comprehensiveness of Hall’s discussion of adolescents lent weight and authority to other social movements whose common aim was to treat people in their teens differently from adults and children. Among the book’s supporters were secondary school educators who found in Hall’s writing a justification for their new enthusiasm about moving beyond academic training to shape the whole person. They also found in it a justification for raising the age for ending compulsory school attendance.

Hall’s book coincided as well with the rise of the juvenile-court movement, whose goal was to treat youth crime as a problem of personal development rather than as a transgression against society. This view encouraged legislatures and city councils to enact laws creating curfews and other “status offenses”—acts affecting only young people. (A decade earlier women’s organizations had successfully campaigned to raise the age of consent for sex in most states, which greatly increased the number of statutory-rape prosecutions.)

Hall’s findings also gave ammunition to advocates of child labor laws. Their campaigns were for the most part unsuccessful, but employment of children and teens dropped during the first two decades of the twentieth century anyway, as machines replaced unskilled manufacturing jobs in many industries. In the years after Hall’s book came out, manufacturers increasingly spoke of workers in their teens as unreliable, irresponsible, and even disruptive. They had stopped thinking of fourteen-year-olds as ordinary young workers and begun to view them as adolescents.

Each of these movements was seen as a progressive attempt to reform American society, and their advocates certainly had their hearts in the right place. But the price for young people was a stigma of incompetence, instability, and even insanity. Adolescents couldn’t be counted on. Hall even argued that female adolescents be “put to grass” for a few years and not allowed to work or attend school until the crisis had passed.

This was the orthodoxy that Mead was trying to combat when she wrote Coming of Age in Samoa. She wanted to disprove Hall’s psychoanalytic assertion that adolescence is inherent to all human development, and replace it with the anthropological view that cultures invent the adolescence they need. Maturity, she argued, is at least as much a matter of social acceptance as it is of an individual’s physical and mental development. In Samoa, she said, adolescence was relatively untroubled, because it didn’t have to accomplish very much. The society changed little from generation to generation. Roles were more or less fixed. Young people knew from childhood what they should expect. American adolescence was more difficult because it had to achieve more, although she clearly didn’t believe it had to be quite so horrible as Hall and his followers thought.

Serious questions have been raised about some of Mead’s methods and findings in Samoa, and Hall’s theories have been thoroughly discredited. These two seminal thinkers on adolescence represented extreme views, and adolescence is of course both biological and cultural. The changes it brings are unmistakable, but countless external factors shape what it means to be a grown-up in a particular place and time. In a dynamic society like that of the United States, the nature of adolescence must inevitably shift over time.

Indeed, Mead’s research, which concentrated on young women, was a product of the sexual revolution of the 1920s, in which female sexuality was widely acknowledged for the first time. Prostitution was on the decrease, and the sexual activity of “respectable” young women was rising. In This Side of Paradise F. Scott Fitzgerald’s young Princetonians were amazed at how easy it was to be kissed. But one of the novel’s young women gives what proved to be an accurate account of what was going on. “Just as a cooling pot gives off heat,” she says, “so all through youth and adolescence we give off calories of virtue. That’s what’s called ingenuousness.”

Short skirts, bobbed hair, corset checkrooms at dances, and petting parties were seen by people at the time as symptoms of libertinism among the “flaming youth,” but when Kinsey interviewed members of this generation three decades later, he learned that the heat had been more finely calibrated than it appeared. Young women had been making their chastity last as long as they needed it to. It turned out that while 40 percent of females in their teens and 50 percent of males petted to orgasm in the 1920s—nearly twice the pre-war rate—petting was most common among those who had had the most schooling. While commentators focused on the antics of the upper classes, working-class young people, who were closer to marriage, were twice as likely to have gone beyond petting to sexual intercourse.

Despite the enduring popular interest in Mead’s findings, Hall’s notion that adolescence is an inevitable crisis of the individual has, over the years, been more potent. (Perhaps it speaks more forcefully to our individualistic culture than does Mead’s emphasis on shared challenges and values.) Certainly, during the post-World War II era, when the teenager grew to be a major cultural and economic phenomenon, the psychoanalytic approach dominated. J. D. Salinger’s Holden Caulfield, literature’s most famous teenager, has an unforgettable voice and great charm, but it is difficult to read The Catcher in the Rye today without feeling that Holden’s problems are not, as he hopes, a phase he’s going through, but truly pathological. While Salinger doesn’t make a judgment in the book, 1950s readers would most likely have thought Holden to be just another troubled adolescent, albeit an uncommonly interesting one.

When Hall was writing, at the turn of the twentieth century, he generalized about adolescents from a group that was still a small minority, middle-class youths whose main occupation was schooling. In all of his fourteen hundred pages, he never mentioned the large number of young people who still had to work to help support their families. Half a century later American society was more or less as Hall had described it, and just about everyone could afford to have an adolescence.

The 25-year period following the end of World War II was the classic era of the teenager. Family incomes were growing, which meant that more could be spent on each child and educational aspirations could rise. Declining industries, such as radio and the movies, both of which were threatened by television, remade themselves to appeal to the youth market. Teenage culture gave rise to rock ’n’ roll. Young people acquired automobiles of their own and invented a whole new car culture.


At the same time, though, teenagers were provoking a lot of anxiety. Congressional committees investigated juvenile delinquency for a decade. High schools and police forces took action against a supposedly rising wave of youth crime, a phenomenon that didn’t really exist. Moreover, there were indications that not all teenagers were happy in their presumed immaturity. Many, if not most, of the pop icons of the time, from Elvis on down, were working-class outsiders who embodied a style very different from that of the suburban teen.

And many teenagers were escaping from their status in a more substantive way, by getting married. The general prosperity meant that there were jobs available in which the high school dropout or graduate could make enough to support a family. In 1960, about half of all brides were under twenty. In 1959 teenage pregnancy reached its all-time peak, but nearly all the mothers were married.

This post-World War II era brought forth the third key thinker on American adolescence, the psychologist Erik Erikson. He assumed, like Hall, that adolescence was inherent to human development and that an identity crisis, a term he invented, was necessarily a part of it. But he also acknowledged that this identity must be found in the context of a culture and of history. He argued that not only does adolescence change over the course of history but it also is the time when individuals learn to adapt themselves to their historical moment. “The identity problem changes with the historical period,” he wrote. “That is, in fact, its job.” While earlier thinkers on adolescence had made much of youthful idealism, Erikson argued that one of the tasks of adolescence was to be fiercely realistic about one’s society and time.


He did not think that forging an identity in such a complex and confusing society as ours was easy for most people. He wanted adolescence to be what he termed “a psycho-social moratorium,” to allow people the time and space to get a sense of how they would deal with the world of which they would be a part. Among the results would be an occupational identity, a sense of how one would support and express oneself.

And so ideas about the nature of adolescence have shaped our image of teenagers. Reclassifying all people of secondary school age as teenagers wasn’t possible until nearly all had some period of adolescence before entering adult life. Still, teenager isn’t just another word for adolescent. Indeed, the teenager may be, as Edgar Z. Friedenberg argued in a 1959 book, a failed adolescent. Being a teenager is, he said, a false identity, meant to short-circuit the quest for a real one. By giving people superficial roles to play, advertising, the mass media, and even the schools confuse young people and leave them dissatisfied and thus open to sales pitches that promise a deepening of identity.

Whether you agree with that argument or not, it does seem evident that the challenges of adolescence have been changing rapidly in the last several decades, leaving the label “teenager” as little more than a lazy way of talking about young people. The term encompasses a contradictory grab bag of beliefs, prejudices, and expectations. It can allow us to build a wall around an age group and to assume that its members’ problems can safely be ignored.


The generation entering its teens today will be, in sheer number if not as a percentage of the population, the largest in our history. The people in this age group have already emerged as the most significant marketing phenomenon since the baby boom. They have spurred the opening of new teen-oriented clothing stores in malls and the launching of successful new magazines. They are helping make the Internet grow. They even have their own television network, the WB. They have their own money to spend, and they spend a lot of their families’ income too, partly because their mothers are too busy to shop.

But they do not represent any return to the teenage golden age of the 1950s and 1960s. This generation has grown up in a period of declining personal income and increasing inequality. A sizable percentage consists of the children of immigrants. Educational aspirations are very high, and no wonder: You need a college education today to make a salary equivalent to that of a high school graduate in 1970. The permanent occupational identity that was available in the post-World War II society of which Erikson wrote, one in which lifelong work for large corporations was the norm, has all but disappeared. Many see their parents still striving for the sort of stable identity Erikson thought could be resolved in youth. While it appears to be a great time to be a teenager, it seems a difficult one to be an adolescent.

Throughout history, Americans in their teens have often played highly responsible roles in their society. They have helped their families survive. They have worked with new technologies and hastened their adoption. Young people became teenagers because we had nothing better for them to do. High schools became custodial institutions for the young. We stopped expecting young people to be productive members of the society and began to think of them as gullible consumers. We defined maturity primarily in terms of being permitted adult vices, and then were surprised when teenagers drank, smoked, or had promiscuous sex.

Young people became teenagers because we had nothing better for them to do. We began seeing them not as productive, but as gullible consumers.

We can no longer go to Samoa to gain perspective on the shape of our lives at the dawn of the third millennium, nor can we go back in time to find a model for the future. What we learn from looking at the past is that there are many different ways in which Americans have been young. Young people and adults need to keep reinventing adolescence so that it serves us all. Sometimes what we think we know about teenagers gets in our way. But, just as there was a time, not long ago, before there were teenagers, perhaps we will live to see a day when teenagers themselves will be history.

