LS Re: Sv: The Turing Test (was Artificial Intelligence)


Diana McPartlin (diana@asiantravel.com)
Wed, 15 Oct 1997 08:16:43 +0100


Anders Nielsen wrote:
> You seem to base your argument that computers can't be programmed to pass
> the Turing test (and therefore not be programmed to think) because
> computers can't make value-judgments, they have no feelings. And I would
> grant that were that the case, your argument would be correct, but nowhere
> do you present a case for why computers can't be programmed to do
> value-judgments, you just assume it's the case.

Yup, you're right, I didn't give a reason.

>
> How do you know that humans have feelings (can make value-judgments)?
> Well, you know that you have feelings, and you can see that other people
> act as if they had feelings, and you can talk to other people, and they'll
> respond to what you're saying as if they had feelings themselves, but you
> can't be sure. You don't have first-hand knowledge that other people have
> feelings.
>
> So the objective is to write a computer that can act as if it had feelings
> in a convincing manner.
> (but that has been clear all along).

> Values and feelings are closely related to the direct body-stimuli (in fact
> I'd say they are all direct derivatives of direct body-stimuli), so an
> ordinary computer would have some trouble competing in this area, but you
> could equip a computer with a video-camera to make up for loss of sight,
> and similar machines for other body-functions (make a button that if you
> press it (very tamagotchi-esque), the computer has a sensation similar to
> what we experience when we're having sex (and set up a social pattern that
> says it's only proper if other people/computers press the button, never
> push it yourself...it'll rot your spine!))....These machines will
> substitute for body-stimuli.
>
> Then make a program that can learn the language and social patterns and so
> forth from what it "hears" and "sees", and given enough learning time, I
> can't see any problems with this machine becoming an AI.

It sounds straightforward enough, but I think you've glossed over the main
issue. These are two separate points. First, create a computer that can see,
hear, feel and so on. Second, teach it to talk and to understand social patterns.
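To make the second point concrete, the "learn language from what it hears" idea quoted above could at its simplest be something like a word-chain learner. This is my own minimal sketch, not anything the original post specifies, and it only mimics surface patterns, which is exactly the gap in question:

```python
import random
from collections import defaultdict

# Toy sketch of "learning language from what it hears": a first-order
# word chain. It records which word follows which, then "babbles" by
# walking those observed transitions. No experience involved, only
# statistics over input text.
class WordChain:
    def __init__(self):
        self.next_words = defaultdict(list)  # word -> observed successors

    def hear(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a].append(b)

    def babble(self, start, length=5, rng=None):
        rng = rng or random.Random(0)  # seeded for repeatability
        out = [start]
        for _ in range(length - 1):
            choices = self.next_words.get(out[-1])
            if not choices:
                break
            out.append(rng.choice(choices))
        return " ".join(out)
```

However much text such a program "hears", it is still only reshuffling observed sequences, which is why the question of genuine experience remains untouched.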

The first point strikes at the very heart of the AI debate. Sure, we can create
a computer with surrogate senses, but does that prove that the computer
actually *experiences* them? As you said, I can't prove it doesn't and you
can't prove it does. So all we can do is agree that if the computer behaves as
if it has these experiences, we will say that it does experience them.

As for teaching it language and social patterns: if you are suggesting that
the computer cannot learn these unless it has actually *experienced* them
itself, then I agree. But then we're right back where we started. The language
would prove the experience, but the language can only emerge if the experience
comes first. Sure, if the computer can actually experience, it will become an
AI. But *can* it experience? That is what needs to be proved.

> The problem I think is that you all think of computer programs as if they
> could only be:
>
> if (sentence includes "flower") {
> respond "I like flowers"
> } else if (sentence includes "mom") {
> respond "mom was nice"
> } else if (sentence includes "not") {
> respond "why are you so negative?"
>
> etc...
>
> and with this scheme I'll grant that no matter how large and complex you'll
> never get anything that is proper AI (but fortunately I don't think you'd
> ever get anything that could fool a (properly instructed and intelligent
> enough) human in a Turing test).
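The if/else scheme quoted above can be written out as a runnable toy, e.g. in Python. The point stands either way: however large the rule table grows, it remains this kind of lookup:

```python
# Runnable version of the quoted if/else scheme: a fixed table of
# keyword rules, scanned in order. The first matching keyword wins.
RULES = [
    ("flower", "I like flowers"),
    ("mom", "mom was nice"),
    ("not", "why are you so negative?"),
]

def respond(sentence):
    lowered = sentence.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return "tell me more"  # fallback when no rule matches
```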

Yes, this is how I think of them - logical, mathematical algorithms. Maybe you
could spice it up with some randomness, but essentially the program must be
logical.
Am I wrong? What other way of writing them is there?

This inherent rationality of computer programming is the reason I think it is
impossible to write a program that simulates a sense of value. The objective
of the program would be to seek out and pursue dynamic quality. In order to
write it you would have to come up with a rational description of dynamic
quality. Show me how you can do it and I'll believe it's possible.
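To show what I mean: any attempt I can picture ends up reducing "pursue dynamic quality" to a fixed, rational scoring rule. Here is a hypothetical sketch of my own, with plain word-novelty standing in for dynamic quality, which illustrates the reduction rather than escapes it:

```python
# Hypothetical sketch: "value judgment" reduced to maximising a static,
# rational metric. Here the metric is just word novelty - the count of
# words the program has never seen before.
def novelty_score(sentence, seen_words):
    words = set(sentence.lower().split())
    return len(words - seen_words)

def pick_most_dynamic(candidates, seen_words):
    # The "preference" is nothing but an ordinary max over a fixed rule.
    return max(candidates, key=lambda s: novelty_score(s, seen_words))
```

Whatever metric you substitute for `novelty_score`, the metric itself is a piece of static logic, and that is precisely the rational description of dynamic quality I am asking for.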

> PS:
> regarding the japanese computer-idols:
> Can you ever be sure that there isn't a human answering the questions for
> them?
> Like typing the answer, and just having a program that makes the graphic to
> make it appear that the
> program is answering?...I would imagine that's how it was made, but I
> haven't seen much of them (Japanese computer-idols aren't really the big
> thing here in Denmark).

There's plenty of stuff online. Try this
http://www.wintermute.co.uk/users/stevec/dk96/dk96.html
On the CD version, the idol Kyoko Date can answer a handful of questions. Her
profile (http://www.dhw.co.jp/horipro/talent/DK96/prof_e.html) will give you
an idea. But in her animations and radio interviews there are obviously real
people behind her.

Diana

--
post message - mailto:skwok@spark.net.hk
unsubscribe/queries - mailto:diana@asiantravel.com
homepage - http://www.geocities.com/Athens/Forum/4670



This archive was generated by hypermail 2.0b3 on Thu May 13 1999 - 16:42:05 CEST