Fenrir Logo Fenrir Industries, Inc.
Forced Entry Training & Equipment for Law Enforcement






Have You Seen Me?
Columns
- Call the Cops!
- Cottonwood
Cove

- Dirty Little
Secrets

>- Borderlands of
Science

- Tangled Webb
History Buffs
Tips, Techniques
Tradeshows
Guestbook
Links

E-mail Webmaster








"I Think, Therefore..."

Some time around the fifth century BC, the Greeks made the first serious attempt to define a human being. The early tries were not too successful. The Platonic definition, "Man is a featherless animal with two feet," induced Diogenes to show up with a plucked chicken (perhaps the original chicken joke). The amended definition, adding "without claws," was not very persuasive.

Aristotle, the greatest philosopher of ancient times, seems to have been the first person to declare that Man was best defined as a rational animal. He said nothing about Woman, who in the fourth century BC was not considered a suitable subject for philosophy, and the reason he offered to justify his statement would not impress today's average six-year-old. Aristotle said that humans are rational because they can do arithmetic, while no animal can.

The Greek notation for numbers was awkward, the multiplication table was vastly complicated, and only clever people could do more than the simplest calculations. It seemed reasonable at the time to equate being rational, and able to think, with the ability to calculate.

Not any more, when for a few dollars you can buy an electronic calculator that can outperform any human in speed and accuracy. Computers check spelling and grammar better than any man or woman. They can compare different prose styles well enough to mark passages in the works of famous writers as being by another hand. Today we are close to the point where computers can calculate, and people cannot. I have been in a gas station where the cash register was broken and the girl in the checkout line could not work out the change from a $20 bill. Rather than trust my arithmetic, she called the manager.

We need some quite different definition of thinking. We particularly need it when we consider the field of Artificial Intelligence, or AI. If we are unable to define what we mean by thinking, how will we decide if some future computer that we build is intelligent and actually able to think?

I am not proposing to offer answers, since I do not have them and I am not persuaded that anyone does. Instead, I am going to offer a series of questions that illustrate some of the difficulties.

QUESTION: Can something (living creature or computer) be said to think if it not also self-aware?

This is important, because we know that computers carry out incredibly complex tasks of calculation. Most people would say that such computers are not really thinking, because the computer is not aware of what it is doing.

Very well. A dog, when it chases a ball, certainly knows what it is doing. A dog is also self-aware (though not, as anyone will testify who has witnessed a dog's uninhibited personal habits, self-conscious). However...

QUESTION: Is the dog that chases a ball, and may need to negotiate multiple obstacles in order to catch it, actually thinking?

I don't know, but a dog would certainly fail the Turing test. In 1950, Alan Turing tried to avoid all questions of self-awareness and consciousness by proposing a practical plan. Put a human on one end of a phone line and a computer on the other. If the human cannot decide, by asking questions and examining the answers, whether he is dealing with a computer or with another human, then the machine has passed the test and should be regarded as able to think, a "thinking machine." The computer must of course be allowed to lie when asked certain direct questions about itself.

QUESTION: Is the Turing test sufficient to define thinking and intelligence?

Most people today say that it is not. The usual objection to the Turing test is that it simply moves the problem from one place to another, from the machine to the questioner. The computer may be judged to think, or not to think, depending on the intelligence of the human who interrogates it. A smart computer program might fool a stupid person, but not a smart person. Yet no one would deny that both those people can think. We find unacceptable any definition of thinking that is dependent on the thinking ability of something else.

I want to mention another problem with the Turing test. To illustrate it, suppose that at the other end of the phone line I place not a computer, but an ants' nest.

Anyone who has studied an ants' nest, or a beehive, or a termite mound, learns two things. First, that the individual insects seem to have minimal intelligence and even less free will (in E. B. White's book, "The Once and Future King," the sign at the entrance to the ants' fortress reads, EVERYTHING THAT IS NOT FORBIDDEN IS COMPULSORY). Second, the nest or hive as a whole runs in a way that suggests an enormous sense of purpose and intention, and even of ingenuity. Arbitrary damage to a nest is promptly and efficiently repaired.

Maybe an ants' nest has intelligence, of its own alien kind; maybe it thinks, in a way that we do not understand. And maybe computers, if and when they finally achieve intelligence, will also think in a way that we cannot comprehend. One weakness of the Turing test is that it can only be applied when the intelligence we are dealing with (computer or living thing) is sufficiently like us in its thought processes to permit communication. As Konrad Lorenz remarked, "If a lion could speak, we would not understand it."

You might at this point be asking, does any of this really matter?

QUESTION: Is it important to know if a machine can think, or when a computer finally becomes self-aware?

That is one of the few questions that I can answer. It is not important - today. But at some point in the future it will become vastly significant. That point will be reached the day that some computer, arguing that it and its brethren are conscious thinking beings, demands certain inalienable rights. Among these will be the right to a continued existence, and the right to be a free agent not belonging to any other individual or organization. The petition for other freedoms and privileges will follow.

QUESTION: When will a computer first be given the right to vote?


Copyright-Dr. Charles Sheffield-2001  

"Borderlands of Science" is syndicated by:


"Borderlands of Science"
by Dr. Charles Sheffield

Dr. Charles Sheffield



Dr. Charles Sheffield was born and educated in England, but has lived in the U.S. most of his working life. He is the prolific author of forty books and numerous articles, ranging in subject from astronomy to large scale computing, space trasvel, image processing, disease distribution analysis, earth resources gravitational field analysis, nuclear physics and relativity.
His most recent book, “The Borderlands of Science,” defines and explores the latest advances in a wide variety of scientific fields - just as does his column by the same name.
His writing has won him the Japanese Sei-un Award, the John W. Campbell Memorial Award and the Nebula and Hugo Awards. Dr. Sheffield is a Past-President of the Science Fiction Writers of America, and Distinguished Lecturer for the American Institute of Aeronautics and Astronautics, and has briefed Presidents on the future of the U.S. Space Program. He is currently a top consultant for the Earthsat Corporation




Dr. Sheffield @ The White House



Write to Dr. Charles Sheffield at: Chasshef@aol.com



"Borderlands of Science" Archives