Thursday, 19 January 2017

Thinking Machines?



Last week I discussed the prospect of humans marrying robots (see http://gordonfeil.blogspot.ca/2017/01/brave-new-world.html). For decades there have been discussions of whether a machine can ever attain sentience. Will there ever be a machine that can feel, understand, and form opinions? Will a machine ever be aware of its own awareness? Think about that. In the series of posts I am doing on love, I am eventually going to suggest that your programming and you are not the same thing. Your programming can change, but you will still be there. You are simply awareness. That’s it. You are different from other species of earthly life because you are not only aware, but aware that you are aware.

Will a machine ever be able to be that way? That’s like asking if a machine can ever have a soul, if that term refers to personality. (Now that might be a good discussion one day: “What is a soul?”) So I pose the question of whether a machine will ever be sentient, but is it even relevant? How could we ever tell? Alan Turing suggested that it is not useful to ask whether a machine will ever be able to think, because we don’t have a way of knowing (I think). He proposed a test based on the imitation game.

That one works like this. A man and a woman each go into a separate room, both rooms connected to a third room by some text-only channel. In that third room is an interrogator who knows neither of them and does not know who is in which room, but does know that only one of them is a woman. This third person asks questions of each and tries to determine which one is the woman. The catch is that both are trying to convince the interrogator that they are the woman.

The Turing test puts a machine in one room and a human in the other. The interrogator knows that one of them is a machine, but not which, and tries to determine by texting which is which. Both try to convince the interrogator that they are the real human. A machine that can go undiscovered as a machine for a predetermined amount of time, or number of questions, is deemed to have intelligence.
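For readers who like to see such things concretely, here is a rough sketch in Python of the shape of that test. It is only an illustration of the structure, not a working chatbot: the names human_respond, machine_respond and run_test are my own hypothetical stand-ins, one relaying questions to a person typing replies and one standing in for whatever program is trying to pass.

    import random

    # Illustrative sketch of the Turing test structure (not a real chatbot).

    def human_respond(question):
        # A real person types an answer over the text-only channel.
        return input(f"(human) {question}\n> ")

    def machine_respond(question):
        # Placeholder for whatever program is trying to pass the test.
        return "That is an interesting question."

    def run_test(questions):
        # Hide the two respondents behind the labels "A" and "B" at random,
        # so the interrogator cannot tell from the label which is the machine.
        responders = [human_respond, machine_respond]
        random.shuffle(responders)
        hidden = dict(zip("AB", responders))

        transcript = []
        for q in questions:
            for label in ("A", "B"):
                transcript.append((label, q, hidden[label](q)))

        # The interrogator reads the transcript and guesses which label is the
        # machine; if it goes undetected for the agreed number of questions,
        # it is deemed to have passed.
        return transcript, hidden

The only point the sketch makes is that the interrogator ever sees just the labels A and B and the text that comes back, never which room holds the machine.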

How does simulated intelligence differ from the real thing? How does a shadow differ from the thing that casts it? But is that really the right question? Consider a flight simulator. Is that computer actually flying? You give it throttle and the simulated airplane gains speed (if you have released the brakes). You extend flaps and the simulated plane gains lift and slows from the extra drag. The results are the same as flying a real airplane (well, not really… real flying does not feel like any simulator I’ve ever used, but let’s pretend there is one that is 100% faithful to real flying). How does the computer’s experience differ from the real pilot’s? Well, of course it IS different. The pilot is aware. The computer is electronic signals. But so is the pilot’s brain. Back to the reductionism we challenged at https://gordon-feil-practical-living.blogspot.ca/2017/01/creating-reality.html.

There are no clear-cut answers on these issues, so it becomes evident why the notion of a human marrying a machine makes sense to some people.

(The next post on this topic is at
