One can’t watch much TV today without seeing an ad from IBM or someone else touting how Artificial Intelligence (AI) is changing everything.  I watched an interview with Satya Nadella of Microsoft, along with the SAP and Adobe CEOs, where they went on and on about AI this and AI that.  I even had a financial-software sales guy call me about how his company had been using AI to scan price data for 15 years already.  I always chuckle when I see these things.  OK, sure.  But is it really AI?  I think most of it is just really smart programming.

One of the key distinctions in AI is general purpose versus special purpose.  Think of your Android or Apple smartphone.  It is pretty cool that you can just talk to it and the “AI” will come back with some usually relevant information in response.  That is special purpose AI, usually coming out of the Search universe.  Really smart programming.  Automated cars are another example.  Using sensors and applying reams of logic to the input, some really smart special purpose AI programming does a passable job of moving a large object through the 3D world, at least in optimal weather conditions.  Of course, an Audi Q5 I was driving recently had its sensors squawking at me constantly during a huge snowstorm here.  I would not have wanted to trust the AI to drive me around in those conditions.  Still, Nvidia stock has shot higher in the last few years because its video silicon is well suited to running these AI workloads fast enough, though much of that may be because its chips are also used for crunching the numbers in bitcoin mining.

I do not think, though, that the AI out there today passes the Turing Test, which is generally accepted as the hurdle for general purpose AI: if the program makes you think it is human, then it is AI.  I recently read an article about how DeepMind is exhibiting characteristics of intuition.  That is pretty cool, but still only a step on the way to true AI.  Another article discussing the machine learning approach to AI is here.  But see another article here about the perils of overpredicting the sentience of AI.  Or this from MIT about some of the problems and ethical issues arising as AI gets closer to true AI.

I am having trouble finding a link to an interview with another MIT professor, one of the pioneers of AI, who thinks that the true or general purpose AI we imagine is still a long way off.  Of course, one of the issues is that we are facing a technological dead end in computing.  Moore’s Law has stopped working.  Intel and other chip manufacturers have reached the limit of how many transistors they can squeeze into a given area of silicon.  With manufacturing processes now etching channels at 14 nanometers and below, quantum effects start to interfere with chip function.  Sure, you get multiple cores on a CPU now, but parallelizing work, while it buys some additional speed, is not the same as doubling chip capacity every couple of years.  It has limits.  So unless we get photonic chips or a breakthrough in quantum computing, the sheer power needed to create generalized AI might be out of reach.
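The limit on what parallelism can buy you is usually stated as Amdahl’s Law: if only a fraction p of a workload can run in parallel, the serial remainder caps the speedup at 1/(1−p), no matter how many cores you throw at it.  A minimal sketch in Python (the fractions here are illustrative, not measurements of any real chip):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup under Amdahl's Law.

    parallel_fraction: share of the workload that can run in parallel (0..1).
    cores: number of identical cores applied to that parallel share.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the work parallelizable, the speedup plateaus
# as cores multiply, instead of doubling every couple of years:
for n in (2, 8, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.9, n):.2f}x")

# The hard ceiling is 1 / (1 - 0.9) = 10x, however many cores you add.
```

With 1,024 cores the speedup is still under 10x, which is the point: more cores are an improvement, but not the same kind of improvement as shrinking transistors used to deliver.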

Finally, a thought on the Turing Test itself.  The Turing Test suffers from the same inexactitude as the taxonomic system developed by Linnaeus.  Scientists long classified species by observing the visual characteristics of their subjects: if a bird has certain physical characteristics, then it is of species X.  This is essentially how the Turing Test works as well.  You observe what the AI says or does, and if that looks human, then it is AI.  Of course, we now have a much better way to classify animals: extracting and cataloguing their DNA.  The new taxonomy enabled by knowledge of DNA is much better than the one based on observable physical characteristics.  But is there an analogous way to get at self-awareness, which is essentially what general purpose AI must have if it is truly AI?  We cannot yet do such a thing even with humans.  Self-awareness is entirely subjective, not observable.  This gets at the whole concept of duality, Descartes’ Cogito, ergo sum, and the effect of the observer on the observed: too much to go into here.  In any event, really smart people will continue to get better at developing pockets of AI here and there, but true AI still lies well down the road, where it has sat just out of reach over the horizon for as long as people have been talking about it.