from The Cult of Information (chapter)
Cybernetics and the Secret of Life
by Theodore Roszak
[The following excerpt from Chapter One of the book The Cult of Information,
by Theodore Roszak, is provided here for
non-commercial/educational purposes in the interest of contributing to "a
continual development of knowledge and its unhampered exchange." You
should be able to find the book at the library. Otherwise you can buy it online
at Amazon.com or BarnesandNoble.com.]
In my own life, there was a book that did more than
UNIVAC to revise my understanding of information and the machinery that
manipulated it. In 1950 the mathematician Norbert Wiener wrote a pioneering and
widely read study called The Human Use of Human Beings, a
popularized version of his classic 1948 work Cybernetics. For the general reading public
this engaging and provocative little book landmarked
the appearance and high promise of "cybernation," the word Wiener had
coined for the new automated technology in which he could discern the
lineaments of a second industrial revolution. In the pages of his study, the
computer was still an exotic device without a fixed name or clear image; he
quaintly refers to it as "an ultra-rapid computing machine." But even
in its then primitive state, that machine figured importantly in what was for
Wiener one of the key aspects of cybernation: "feedback," the ability
of a machine to use the results of its own performance as self-regulating
information and so to adjust itself as part of an ongoing process.
Wiener saw feedback as far more than a clever
mechanical trick; he regarded it as an essential characteristic of mind and of
life. All living things practice some form of feedback as they adapt to their
environment; here then was a new generation of machines reaching out toward the
status of a sentient animal, and so promising to take over kinds of work that
only human intelligence had so far been able to master. And not only work, but
certain other kinds of play as well. Wiener was much impressed by the research
then under way to build chess-playing machines; this served as further evidence
that machines would soon be able to process data in ways that approach the
complexity of human intelligence. "To live effectively," he
concluded, "is to live with adequate information. Thus, communication and
control belong to the essence of man's inner life, even as they belong to his
life in society."
Wiener was claiming nothing less than that, in
perfecting feedback and the means of rapid data manipulation, the science of
cybernetics was gaining a deeper understanding of life itself as being, at its
core, the processing of information. "It is my thesis," he wrote,
"that the physical functioning of the living individual and the operation
of some of the new communication machines are precisely parallel in their
analogous attempts to control entropy through
feedback."
Some five years after Wiener's book was published, a
new field of study based on his thesis announced its presence in the
universities, an intellectual hybrid of philosophy, linguistics, mathematics,
and electrical engineering. It was called artificial intelligence, or AI. The key assumption of AI was clear from
the outset: in the words of two of the discipline's founding fathers, Allen
Newell and Herbert Simon, "the programmed computer and human problem
solver are both species belonging to the genus 'Information Processing
System.'"
A few years further along (1958), Newell and
Simon were pitching their hopes sky-high:
There are now in the world
machines that think, that learn and create. Moreover, their ability to do these
things is going to increase rapidly until--in the visible future--the range of
problems they can handle will be co-extensive with the range to which the human
mind has been applied.
At the time they made the prediction, computers were
still struggling to play a credible game of checkers. But Simon was certain
"that within ten years a digital computer will be the worlds chess champion."
Wiener himself may or may not have agreed with the
glowing predictions that flowed from the new study of artificial intelligence,
but he surely did not endorse its optimism. On the contrary, he regarded
information technology as a threat to short-term social stability, and possibly
as a permanent disaster. Having invented cybernetics, he intended to function
as its conscience. The Human Use of Human Beings, as the phrase itself
suggests, was written to raise public discussion of the new technology
to a higher level of ethical awareness. Automated machines, Wiener observed,
would take over not only assembly line routine, but office routine as well.
Cybernetic machinery "plays no favorites between
manual labor and white collar labor."
If left wholly in the control of short-sighted, profit-maximizing
industrialists, it might well "produce an unemployment situation, in
comparison with which... even the depression of the thirties will seem a
pleasant joke."
Two years after Wiener issued that warning, the first
cybernetic anti-utopia was written. In Player Piano, Kurt Vonnegut,
Jr., who had been working in the public relations
department of General Electric, one of the companies most aggressively
interested in automation, imagines a world of intelligent machines where there
is "production with almost no manpower." Even the barbers have been
displaced by haircutting machines. The result is a technocratic despotism
wholly controlled by information technicians and corporate managers. The book
raises the issue of whether technology should be allowed to do all that it can do,
especially when its powers extend to the crafts and skills which give purpose
to people's lives. The machines are slaves, Vonnegut's rebellious engineer-hero
insists. True, they make life easier in many ways; but they also compete with
people. And "anybody that competes with slaves becomes a slave." As
Vonnegut observes, "Norbert Wiener, a mathematician, said that way back in
the nineteen-forties."