Scatterlings of the Metrocene: Evolution, Education and the Dawn of the Cyberhuman Epoch

Samuel R. Smith, University of Colorado
Jim Booth, Surry Community College

She held out her hands, palms up, the fingers slightly spread, and with a barely audible click, ten double-edged, four-centimeter scalpel blades slid from their housings beneath the burgundy nails. She smiled. The blades slowly withdrew.
– William Gibson, Neuromancer (1984)

Pat Diener…is 26 years old, and she is going deaf. Landing her in the annals of science are the microscopic electrodes that doctors have buried deep inside her brain. Two fine platinum wires – as thin as a human hair and insulated in teflon – run underneath the young woman’s skull, connecting the electrical circuitry inside her head to a black plastic plug that sticks out from behind her left ear. From there, Diener can wire herself into a pocket-sized “speech processor” that picks up sound and transmits it to the electrodes, enabling the brain to interpret it.
– Associated Press Wire Report, 12/2/92

The technological explosion of the last few decades has made workaday fact of once-wild science fictions like genetic engineering, space travel, laser surgery and computer-generated animation – not to mention the handy little construct used to produce this document, the IBM-compatible 386-SX personal computer.

These innovations, and countless others besides, have improved the lot of mankind immeasurably; however, while we have dedicated so much energy to the conception and construction of better technologies, distressingly little attention has been devoted to the subtle yet profound manner in which our creations have transformed us.

Thanks to television and instantaneous global communications, thanks to the electronic data base, to the video game system, word processor, hand-held calculator, digital synthesizer, computer bulletin board and infonet – thanks to a boggling array of modern and post-modern amusements and conveniences, humans have evolved, perhaps more rapidly and more dramatically than at any time in our history.

The term “evolution” is used advisedly – evolution connotes large-scale, systemic change, as opposed to “mutation,” which implies limited, isolated instances of change. The following pages will detail what we believe to be our threefold evolution: first, that television, in its newfound role as socializer, has caused us to become significantly less thinking and more intuitive; second, that due to all manner of technology and mass media, we have acclimated to tremendous increases in societal noise and sensation; and third, that computerized information storage and retrieval technology has rendered obsolete the notion of the brain as data repository, recasting it instead as a more practical information-processing system.

As the technology curve becomes more and more vertical, the pace of evolution escalates. It is critical that we step aside for a moment of self-examination, that we look beyond the cosmetic concerns of how we live and what we do; at issue is the more essential question of what we are, and more importantly, what we are becoming with respect to technology.

The cybernetic future envisioned by the likes of Philip K. Dick and William Gibson has arrived, and earlier than most of us anticipated. Materially, we’re still in the hardware revolution’s early stages: pacemakers, artificial hearts, Bo Jackson on the verge of returning to the Major Leagues with an artificial hip, Pat Diener’s neural prosthetic implant, and, just around the corner, Emory neurophysiologist Donald Humphrey’s development of electronic devices which he hopes will enable a monkey to manipulate a robot arm simply by thinking about it.

Psychologically, however, we are well into an irreversible convergence with machinekind. The essential relationship between man and machine has changed qualitatively; where the computer was once a tool which made a scientist’s job easier, it is now an indispensable partner in the scientific and creative process. Huge portions of what researchers, doctors, engineers, artists and videographers now do would be impossible – not inconvenient, but literally impossible – without the aid of our silicon colleagues.

Of course, even the most rampant optimists admit that we’re a few years removed from the day when bionic limbs and neuroelectronic interface implants are commonplace; still, it’s safe to say that, in the societal subconscious, we have crossed over. While our bodies may be ours, for the time being, our hearts and minds have been promised. The Good Ship Terra has become, for good or ill, a cybernetic culture.

Television and the Rewiring of the Human Brain

There’s just no arguing the import of television in modern America. According to Connoisseur magazine there are currently over 750 million TVs in this country – roughly three apiece for every man, woman and child. By one count, about 98% of American households have at least one TV, while only 96% can claim indoor plumbing. Most estimates have the average American watching television between four and six hours a day. The numbers are striking, certainly, but television’s ultimate impact goes much deeper than the quantitative concerns of simple demography.

What is more significant is the way in which television has, over the last few decades, slowly usurped the function of societal socializer. More and more, TV is the medium that establishes, shapes, and transmits societal norms and mores. It is the great homogenizer, in a sense, and Connoisseur says it is the medium through which 70% of all Americans receive most or all of their information about the world.

While Americans in general can be found glued to the tube in alarming numbers, perhaps no segment of our society is more affected than our children, many of whom are in their peak formative years. The A.C. Nielsen Company estimates that children aged six to eleven spend an average of 29 hours per week in front of the TV, while those in the critical two to five age group watch 33 hours per week. By the time these kids reach college, they will have spent roughly 11,000 hours in the classroom; in that same period of time they will have spent 22,000 hours or more watching television. Connoisseur estimates that the only activity these teens will have devoted more time to than TV is sleeping.
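
As a rough check on those figures, consider the following back-of-the-envelope sketch; the year spans and the assumption of roughly 180 six-hour school days per year are ours, not Nielsen’s or Connoisseur’s.

    # Rough sanity check of the viewing and classroom figures cited above.
    # Assumptions (ours): ages 2-5 at 33 hours/week, ages 6-17 at 29 hours/week,
    # 52 weeks of viewing per year, and 13 school years of ~180 six-hour days.

    tv_hours = 33 * 52 * 4 + 29 * 52 * 12   # 6,864 + 18,096 = 24,960 hours of TV
    classroom_hours = 6 * 180 * 13          # 14,040 hours in the classroom

    print(f"TV: {tv_hours:,} hours  classroom: {classroom_hours:,} hours")
    # TV: 24,960 hours  classroom: 14,040 hours -- the same ballpark as the
    # 22,000-plus and 11,000-hour figures cited above.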

Okay, so kids watch too much TV and this can’t be good for them. The real problem, though, is a bit more complicated, and rests in television’s essential nature, in Marshall McLuhan’s notion of television as a “cool” medium – that it’s ultimately more involving imagistically than intellectually.

Any introductory class in communication theory will explain that the message is just one part of the process. Equally important is the medium, which interprets, shapes and tones the message it transmits. There’s no reason to believe that the medium is any less a factor during the socialization process – in fact, according to McLuhan’s dictum that “the medium is the message,” it’s easy to argue that the content of a particular fairy tale is less important than the inherent structure of the form itself.

Traditionally, socialization has been largely accomplished through narrative media. Cultural values were transmitted through stories: fables, fairy tales, Bible and bedtime stories, all of which tend toward chronological structure, with a clearly defined beginning, middle, and end, usually leading to either a stated or implied moral of some sort. And stories are told for the express benefit of the reader (or listener).

Now, though, television has displaced these traditional media, and TV is anything but linear. There’s no continuity between shows, for starters, and over the course of TV’s history the shows themselves have grown less and less traditional in their structure. Early shows like Leave It to Beaver, The Honeymooners, and Father Knows Best were basically plays adapted for a camera. The scriptwriters were still very much storytellers, the plots tended to be quite linear, and we usually got a lesson at the end.

The camera assumed a limited perspective, that of the theatergoer. It wasn’t unusual for shows like I Love Lucy to play an entire scene – in some cases as long as five straight minutes – to a single camera angle. And in a recent interview published in our local paper, Andy Griffith credits a dedication to the single-camera format for much of The Andy Griffith Show’s overwhelming success.

Contemporary television, however, employs a far more omniscient camera. By relying on multiple angles and frequent cuts, directors accomplish an effect similar to what would happen if theatergoers were allowed to roam freely around the stage, from room to room, even backstage. This would appear to undermine the interactive element inherent in the traditional storyteller-reader paradigm, since the story no longer plays to the viewer; the viewer is reduced to simple voyeur.

Contemporary scriptwriters also do their part, making ample use of what we like to call the volleyball method – bump, set, spike – where one cast member or another is set up for a punchline kill every ten seconds or so. This, of course, comes at the expense of narrative continuity.

In some cases the whole pretense of storyline is abandoned. The worst offender is probably the most popular sitcom of the last decade, The Cosby Show. Traditional plot was never a strength of the show, and many episodes are aimless meanders from one disconnected and usually pointless vignette to another. The events of the first five minutes in no way predict the last five minutes, and it’s often difficult to describe Cosby in terms of “the episode where _________.” Rarely is an episode about any one thing in particular.

Then there’s MTV, for the last twelve years television’s cutting edge – technically, creatively, and commercially. With MTV’s relentless sound and image assault, the viewer is basically treated to a new show every three or four minutes, instead of the usual half-hour or hour.

And the narrative-to-imagistic evolution of TV’s first forty-plus years is evident, in microcosm, in MTV’s first decade. Early MTV videos were often quite narrative, with the director either retelling the story of the song or, in cases of purely lyrical pieces, using video to create a narrative context for the song.

Now, though, rampant imagistic discontinuity is the norm – and any narrative element present is often undercut by splice-a-second intercutting techniques. On shows like “120 Minutes” it isn’t unusual to find videos that cut 150 times a minute – and the images themselves often have little or nothing to do with the song or each other, at least not in any linear sense.

The result? Well, TV the socializer is a very different beast from its predecessors, and it isn’t unreasonable to expect its children to be fundamentally unlike those suckled by more traditional media.

Our initial clue that something was amiss came during my (Sam Smith’s) first year teaching freshman composition at Iowa State University. I was stunned at the sheer poverty of organizational skills displayed by my otherwise bright young students. In short order I learned that my experiences matched those of many of my colleagues; further, I discovered that older teachers – some of whom had been involved in the educational process for twenty or more years – perceived a decided slide in the organizational skills of their charges over that period of time. It seemed to them that the problem was getting consistently worse.

One possible explanation lies in the “Death of Gestalt” argument which we advanced at the Wyoming Conference on English in 1990. Psychologist Max Wertheimer contended that:

    Thinking consists in envisaging, realizing structural features and structural requirements; proceeding in accordance with, and determined by, these requirements; thereby changing the situation in the direction of structural improvements… (from Productive Thinking, 1945)

He ultimately concludes that thinking involves “looking for structural rather than piecemeal truth.” If we understand Gestalt theory correctly (admittedly we’re no experts in this area), this longing after structure is regarded as an inherent human trait.

But with the ascendancy of TV as socializer, and with anecdotal evidence of a corresponding decline in the organizational skills of people exposed to more television at an earlier age, it may be time we asked ourselves whether the Gestalt notion of structural thinking is programmed rather than inborn. And if these processes are learned – socialized – then how do we confront the fact that young people, reared on television, are essentially unlike pre-TV generations? What are we to do with the notion that, in a momentous way, humanity has evolved a fundamentally new fashion of perceiving and processing information? What if Gestalt is dead?

Recent studies from Jonathan Schooler and his colleagues at the University of Pittsburgh and the University of Virginia indicate that, at least among college-aged kids, traditional thinking and analysis lead to poorer and less satisfying decision-making. These results, so far, seem intuitively consistent with the idea at hand.

In one study, students were asked to select one of five art posters to take home. The options included pictures of animals and impressionist paintings. Half of the students were first asked to analyze the reasons for their choice, and to write them down. According to a summary in The Washington Post, when called back three weeks later the students who had been asked to analyze their choices – the “thinkers” – “were far less happy with their posters than those who chose without articulating their reasons. They wished they had chosen differently.”

Another experiment asked students to choose between two jams. The non-thinking group’s choices pretty much matched those made by taste-test experts, but the group asked to give reasons for their choices made decisions which varied wildly, both from those of the experts and each other’s.

In a third study, two groups watched videotape of a bank robbery. When asked afterward to identify the robber in a police lineup, the “intuitive” group picked the right man 65% of the time. The other group, however, was asked to provide a detailed description of the robber immediately after viewing the video. When finally faced with the same police lineup, this group only identified the perpetrator correctly 35 to 40% of the time.

Does this evidence lend support to the notion that TV has somehow spurred an evolution in the human thinking process? According to Schooler, the studies so far have involved only college students. Further, none of his methodology addressed the TV connection theorized here. When we’re able to devise reliable, quantifiable methods for measuring narrative versus imagistic tendencies in television and non-television groups – which will be difficult owing to a variety of cultural and age-related factors – we will be better able to answer definitively. But despite the methodological quagmire this research will entail, we believe it’s possible; moreover, we believe it’s imperative.

Access, Excess, Onslaught: Coping with Stimulus Overload

Television also figures prominently in the escalation of societal noise and information levels during the last half of the 20th Century. That stimulus levels have increased dramatically is obvious enough; less evident, apparently, is any coherent understanding of the dangers such media as TV, radio, MTV, computers and video games pose for traditional concepts of literacy.

Psychologists use a concept called “stimulus threshold” to refer to the level at which any stimulus – sound, light, touch – becomes noticeable to the subject. A sound so quiet that a person cannot detect it, for example, is said to fall below that person’s stimulus threshold.

Also of interest are the related questions of how and how well organisms acclimate to new conditions – how did humans adapt to the end of the ice age 10,000 years ago, for example, or how do rats cope with the introduction of electrical current to the floor of their cage?

Humans, in particular, have demonstrated the ability to acclimate to just about anything – from life in the sub-zero climes of Canada and Scandinavia to the harsh desert conditions endured by Bedouin tribesmen, from the high-intensity environment of a Wall Street brokerage house to the supreme tranquility of a Franciscan monastery. Our adaptation to the increases in noise and information over the last several decades is no different, as we have repeatedly assimilated technological and social innovations that would undoubtedly have landed our ancestors in the nearest padded lockup.

But while we’ve adapted, the body of traditional culture often has not. With the exception of books that get made into movies, access to our literary ancestors remains pretty much what it has always been. As a result, a number of things humans once noticed, perceived and related to are now sub-threshold, lost in the white noise. By way of analogy, consider the fate of a whisper at a Def Leppard concert.

Text is the cornerstone of so much traditional culture, and up until recently this was fine. Societal noise levels were more or less the same as they had been for ages – writing and painting were dominant media in 1900, just as they had been in 1400. Styles changed, but the media remained the same. For the sake of discussion, we’ll call this pre-tech noise level x.

In 1993, however, static textual and visual art forms must compete with animated art forms like movies, TV, and music video. The typical American teenager lives at a much higher stimulus level – let’s say 10x. If you don’t believe it, read a few selections from whatever collected poetry volume you have at hand, then spend two hours watching MTV – and make sure you have the volume cranked up to normal teenager levels.

Basic psychology would say that a person acclimated to 10x will likely not even be aware of stimulus level x. Whether the ideas advanced here ultimately prove correct or not, the theory does more to explain the difficulties faced by teachers across the country than anything else we’ve encountered. How can you possibly be expected to make “Ozymandias” interesting to a child of the MTV age, one who’s seen Living Colour’s video for “Cult of Personality,” a work covering very similar thematic ground?
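
To make the 10x argument concrete, here is a toy sketch – emphatically not a real psychophysical model – in which a stimulus registers only if it amounts to some noticeable fraction of the ambient level a person is acclimated to; the 20% cutoff is an arbitrary assumption of ours.

    # Toy illustration of the 10x argument above -- not a psychophysical model.
    # Assumption (ours): a stimulus registers only if it amounts to a noticeable
    # fraction of the ambient level the subject is acclimated to.

    NOTICEABLE_FRACTION = 0.2  # arbitrary cutoff chosen for this sketch

    def registers(stimulus, ambient, fraction=NOTICEABLE_FRACTION):
        """Return True if the stimulus clears the acclimation threshold."""
        return stimulus >= fraction * ambient

    x = 1.0  # the pre-tech "noise level x" described above

    print(registers(stimulus=x, ambient=x))       # True: a poem against the quiet of 1900
    print(registers(stimulus=x, ambient=10 * x))  # False: the same poem against MTV at 10x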

Building the 21st Century Cyborg

Perhaps most interesting of all is how this notion of man-machine evolution answers those who say Johnny can’t read or spell, Jane can’t perform long division, and Jimmy can’t find Mexico on a map; Danny thinks Latin Americans speak Latin, and Jill thinks the phrase “from each according to his ability, to each according to his need” comes from the Declaration of Independence.

From a traditional perspective, we simply don’t know all the things we’re supposed to know. A number of writers and researchers have argued, quite persuasively, that American students are impoverished in basic geography, history, literature, and math skills.

However, while Jane can’t perform long division, she is pretty handy with a calculator. Maybe Johnny can’t spell, but his word processor, like mine, has a built-in spell-checker. And while Danny is probably beyond hope, Jimmy knows exactly where to go to find out all he needs to know about Mexico – especially if his computer is on-line with an interactive infonet like The Source or CompuServe.

Many elementary and secondary schools are getting on the computer-assisted learning bandwagon; it’s probably safe to say that most or all classrooms will be computerized within the next ten or twenty years. And while it’s great that educators are getting more comfortable with the computer, our educational establishment is a long way from understanding the full implications of computers in our future.

It may be useful to employ a computer analogy to explain the evolution of the role of education and the human brain. Many educators essentially see the brain as hard drive – as bit/byte storage, a repository of hard data. Given the sheer, mind-numbing quantity of information in our society, however, the “hard-drive” model is impractical and unfair. There is simply no way to know all the things that we “should” know anymore.

Far more productive is a model which regards the brain as CPU – as information processing system. One of the chief goals of an educational system is to provide students with quick, lasting access to usable information. If all of the necessary information is stored in a particular data base, and a student knows how to access and manipulate that data base, then hasn’t this goal been achieved? Does it matter, ultimately, whether the data base is internal and organic or external and silicon?
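
A minimal sketch of the point, with an entirely hypothetical data base and lookup routine: whether a fact lives in organic memory or in an external store matters less than knowing how to ask for it.

    # Minimal sketch of the "brain as CPU" point above. The data base and the
    # lookup routine are hypothetical; the point is that retrieval skill, not
    # storage location, is what the student actually needs.

    EXTERNAL_DATABASE = {          # stands in for an on-line infonet or service
        "capital of Mexico": "Mexico City",
        "source of 'from each according to his ability'": "Karl Marx",
    }

    def recall(question, memorized=None):
        """Answer from memory when possible; otherwise query the external store."""
        if memorized and question in memorized:
            return memorized[question]             # internal, organic storage
        return EXTERNAL_DATABASE.get(question)     # external, silicon storage

    # Jimmy never memorized Mexico's capital, but he knows how to look it up:
    print(recall("capital of Mexico"))             # Mexico City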

The fact is we’re not far from universal access to the sum of recorded knowledge – personal computers get more affordable every day, and two-way (interactive) cable, which will link nearly every household in the country into a global info and service network, is just around the corner. At that point, all information not classified by government or corporation (assuming there’s still a difference) will be readily available to anyone who knows how to use a computer. Even the classified stuff will be available to those who are really good at defeating security software.

In the near future, the term “literacy” could well come to mean roughly the same thing the term “computer-literacy” means today, because software will have completely replaced text as the ascendant mode of informational transaction. For the time being, reading and writing will remain necessary skills because they enable interface with the computer.

At some point, however, voice recognition technology could render traditional literacy unnecessary. And when the full effect of work by neurophysiological scientists like Donald Humphrey (mentioned above) is realized, biohardware implants could allow us to interface directly with our computers – “jack in,” in other words – and conduct data transactions within the sort of cyberspace network envisioned by science fiction writers like William Gibson and Bruce Sterling.

Traditional literacy could become irrelevant, an anachronism, a quaint hobby nurtured by “Modernists” (a term perhaps used much as we use “Medievalist” today). All of a sudden, competency with “ancient” forms like writing would bear the hyphenated form – “text-literacy.”

None of this is to say that we should abandon all of our traditional notions about education. While access to usable information is a primary goal for educators, it’s also essential that the institution impart knowledge of how to use that information – in other words, information doesn’t equal thought. And teaching how to think is perhaps the most critical of all educational functions, especially since this is the very area under attack by television.

Next Wave

A cursory glance at the Geologic Timetable in Webster’s Dictionary reveals that major evolutionary and anthropological events often parallel significant geological shifts. The first evidence of humanity, for example, roughly coincides with the onset of the Quaternary Period some two million years ago. A Wake Forest University anthropology professor we consulted recently pointed out certain major changes in human living patterns at the beginning of the Holocene Epoch – the “recent,” or post-glacial, period.

The next epoch, she said, would be denoted by some significant geological or environmental shift. Hmmm. Well, between the thinning of the ozone layer, the greenhouse effect, clear-cutting in the Amazonian rain forest, and the effects of acid rain, it isn’t hard to imagine that we’re on the verge of something ominous. In fact, in his book The End of Nature, William McKibben concludes that we have passed the point of no return – we have inflicted irreversible damage on our ecology. We have slain nature and spawned a mutant ecosystem to take its place.

It isn’t at all unreasonable to wonder whether we are in the midst of what geologists 10,000 years from now might see as the transition from Holocene to whatever comes next. The difference between the dawn of this epoch and all others before it, though, is that this time it will be engineered. The environmental changes which loom now are the exclusive product of human technology.

The long-range projections of urban planners up and down the seaboard, as well as the worst fears of terrified 20th Century environmentalists, are realized in the novels of William Gibson, in his description of the Sprawl – BAMA, the Boston-Atlanta Metropolitan Axis – a hundred years from now: one solid, uninterrupted megalopolis, occasionally magnificent and monolithic, but more frequently run-down and degenerate, an ecological disaster zone with huge portions encased in Fuller domes. So, in honor of Fritz Lang and the inspired prophecy of his 1927 classic Metropolis, let’s call this new epoch the Metrocene. And the first age of man in this epoch? How about Cyberlithic?

What role will we – scholars, writers, educators – play in the early stages of this new age? Hopefully, we will lead, shaping the direction of society, because resting on the reactive, rear-guard methods of the past will certainly doom the collected wisdom of traditional culture – of which we are the keepers, just as surely as small groups of cloistered monks were the keepers of the flame during the Dark Ages. But in the sort of high-tech, rapidly changing world we’re talking about, monk-like obsolescence will guarantee cultural disenfranchisement.

In short, we must participate in the future if we are to have a place in it.


Presented to the Wyoming Conference on English, June 1993
Copyright 1993 by Samuel R. Smith & Jim Booth
