
Links With Your Coffee - Wednesday




Re: Why Minds Are Not Like Computers.

As someone with an aversion to the "dancing angels" areas of philosophy, I don't care whether AIs can "think". Turing was right about how meaningless the question is, but I've seen his ideas used, by optimists and skeptics alike, to justify some pretty silly conclusions about AI development.

The Turing Test is indeed a great example because it tests what really matters: cold, hard functional intelligence. If something can carry on a convincing conversation, chances are it can do a lot of the functions that humans can perform. Maybe an AI won't do them for its own reasons, but who cares? Even now, Google can suggest relevant results for my queries, computer game AIs can be challenging opponents, voice recognition software can interpret my commands, and experimental robots can recognize my face and follow me with their camera-eyes as I walk across the room.
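The functional criterion Turing proposed is easy to make concrete. Here is a minimal sketch of the imitation game's protocol in Python; all of the names (`imitation_game`, `judge`, the reply functions) are invented for illustration, and a real test would of course use a human interrogator and free-form conversation rather than canned questions:

```python
import random

def imitation_game(human_reply, machine_reply, questions, judge):
    """Sketch of Turing's imitation game: the judge sees only text
    replies under anonymous labels and must name the machine."""
    # Randomly hide which respondent is the machine behind labels A and B.
    if random.random() < 0.5:
        assignment = {"A": human_reply, "B": machine_reply}
        machine_label = "B"
    else:
        assignment = {"A": machine_reply, "B": human_reply}
        machine_label = "A"
    # The judge gets nothing but the transcripts -- pure input/output.
    transcript = {label: [reply(q) for q in questions]
                  for label, reply in assignment.items()}
    guess = judge(transcript)  # judge returns the label it thinks is the machine
    return guess != machine_label  # True if the machine escaped detection
```

The point of the sketch is that everything the judge can use is behavioral: if nothing in the transcripts gives the machine away, no appeal to what is "really" going on inside either respondent can change the verdict.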

These advances will keep improving, and maybe the results will have "minds" and maybe they will not. All I care about is getting some useful metal-and-silicon slaves out of the deal (slaves, of course, without the faculties for emotions or detailed self-awareness, just to be on the ethically safe side).

And yes, making a functionally intelligent AI will be really really hard. But rocket science is really really hard too. That didn't mean it was practically impossible to go to the moon, or to send probes all over the Solar System, or put space stations (with modules named after TV personalities) into orbit.

Intelligent, giving-the-impression-of-really-thinking AIs are not impossible. All that is required for them to become a reality is time.

And oh yes, on another note, the human mind is not magic. Mind/body dualism is baloney.

So... what was the point of the Mind/Computer article? I spent a lot of time trying to find something useful in that article. I didn't.

I'd like to second 'Frenetic's' comments, but go slightly further and say: you baffle me sometimes, Norm. I could just about understand linking to the original article, which would appear to be 'John Searle for Dummies', but why link to a 3QuarksDaily repost of the worst passages in the article? I don't understand.

The biggest problem, imho, with the 'maybe neurons aren't black boxes' line (and the term 'black box', incidentally, has a special use in debates over functionalism and is being misused by the original author, but no matter) is that if they /aren't/ black boxes, and instead do something magical and special and qualitative, then needless to say this would need to be something over and above their input-output functions.

But if that's the case then it would be possible to have a creature which behaved in every way identically to us - right down to saying things like 'I've got this wonderful, ineffable sensation of red when I see a red apple which is irreducible to the functional goings on inside my brain' when it's read too much Dave Chalmers. We would, or so it seems, have no way of knowing whether we were such a being. And if that's the case, the whole 'the mind is made of magical non-functional wonderstuff' project seems to fall into absurdity.
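The argument above is, at bottom, about behavioral indistinguishability: if two systems have identical input-output behavior, no purely behavioral test can separate them, so whatever extra "wonderstuff" one of them is supposed to have does no detectable work. A toy illustration (all names invented; the two agents stand in for "us" and the functionally identical creature):

```python
def agent_lookup(n):
    # Implementation 1: answers from a precomputed table where it can.
    table = {0: 0, 1: 1, 2: 4, 3: 9}
    return table[n] if n in table else n * n

def agent_compute(n):
    # Implementation 2: internally very different, same input-output mapping.
    return n * n

def behaviorally_identical(f, g, inputs):
    # A black-box test in the functionalist's sense: compare outputs only.
    # Nothing about the agents' internals is visible to this function.
    return all(f(x) == g(x) for x in inputs)
```

However different the internals, any observer restricted to inputs and outputs gets the same verdict for both agents, which is exactly why positing a non-functional extra ingredient leaves us unable to know whether we have it ourselves.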

If the above seemed unclear or rushed, you might want to turn your attention to Sweet Dreams, the published edition of Daniel Dennett's Jean Nicod lectures.

The chess-product descriptions are fantastic.



Copyright © 2002-2017 Norman Jenson

