__/ [ Stephen Fairchild ] on Tuesday 26 September 2006 23:10 \__
> Roy Schestowitz wrote:
>
>> __/ [ Oliver Wong ] on Tuesday 26 September 2006 22:06 \__
>>
>>> "B Gruff" <bbgruff@xxxxxxxxxxx> wrote in message
>>> news:4nsq7pFbu0fqU1@xxxxxxxxxxxxxxxxx
>>>>
>>>> In Manchester, where Turing made his flawed philosophical assumption
>>>> that set academic AI haring down the wrong path for forty years.
>>>
>>> This author is the first person I've heard state so strongly that
>>> the Turing Test is the wrong approach for detecting intelligence. Most
>>> others either agree with Turing's approach, or are unsure but have no
>>> better suggestions. Because the author dismissed the Turing Test
>>> without explaining what is wrong with it, I'm not sure whether he has
>>> an educated opinion, has completely misunderstood the test, or is just
>>> trying to write something provocative to garner more attention.
>>
>> He merely offers an alternative approach, which is perhaps more complex.
>> The mind doesn't quite work in a simple imperative-like manner. Neural
>> networks work (pseudo-)simultaneously and drive towards an outcome.
>>
>> He works on chip design, so he wouldn't just dismiss Turing's work (Turing
>> is among the greatest sources of pride for CS/Math in the University). And
>> he wouldn't provoke as you suggest, trust me. He's a gentleman who keeps a
>> low profile; and he is a Fellow of the Royal Society.
>
> You would need to create a machine that genuinely thinks it is living a human
> life before you could stand a realistic chance of creating a Turing Test
> winner. Those are much longer odds than merely creating a true machine
> intelligence, IMO.
>
> The current attempts at beating the Turing Test are going down the avenue
> of stock replies and syntactic and semantic analysis which in the end just
> gives you a better human language parser.
>
> The only intelligence being demonstrated in the Turing Test is that of the
> programming teams.
The intelligence of each programming team member, multiplied by the joint
intelligence and experience of a few. Plus brute force, which is where all the
so-called power actually lies...
I think it would be interesting to develop processors that work in parallel in
a collaborative, neural-type fashion. It's too ambitious a goal to even
suggest, but we might get there one day. At the moment, neural network code is
being translated to simple machine code and run in inefficient ways (a rough
sketch below illustrates the point)... it would be kind of funky if companies
continued to develop architectures that better suit machine learning. Back in
the days when clock speeds weren't as high as they are today (permitting some
fun stuff in 2-D and sometimes real-time 3-D as well), one had to build
machines that were optimised for machine vision. That's actually what my
Supervisor worked on for many years. So he tells those stories about the times
when you had to build your computers and physically design experiments rather
than use high-level P/L's.
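
To make the point concrete, here is a toy sketch (my own illustration, nothing
from the article): a single fully-connected layer evaluated the way serial
hardware ends up running it, one neuron and one multiply-accumulate at a time,
even though every neuron's output is logically independent and could be
computed in parallel on the sort of architecture described above. Python for
brevity:

    import random

    def forward(inputs, weights, biases):
        """Sequential forward pass: one neuron at a time, one weight at a time."""
        outputs = []
        for neuron_weights, bias in zip(weights, biases):
            total = bias
            for x, w in zip(inputs, neuron_weights):
                total += x * w               # each multiply-accumulate runs serially
            outputs.append(max(0.0, total))  # simple thresholding non-linearity
        return outputs

    if __name__ == "__main__":
        random.seed(0)
        inputs = [random.random() for _ in range(4)]
        weights = [[random.random() for _ in range(4)] for _ in range(3)]
        biases = [0.0, 0.0, 0.0]
        print(forward(inputs, weights, biases))

A neural-style architecture would, in principle, evaluate all of those neurons
at once rather than looping over them one by one.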
Best wishes,
Roy
--
Roy S. Schestowitz | Anonymous posters are more frequently disregarded
http://Schestowitz.com | SuSE Linux | PGP-Key: 0x74572E8E
11:20pm up 68 days 11:32, 9 users, load average: 0.58, 0.95, 0.82
http://iuron.com - Open Source knowledge engine project