Artificial intelligence (AI), states one recent article, is no smarter than a six-year-old. A study that appeared last summer compared the intelligence quotient (IQ) of AI systems from Google, Apple, Baidu, and Microsoft's Bing with that of average 18-, 12-, and 6-year-old humans. The 6-year-old's IQ was rated at 55.5, while the 12- and 18-year-olds rated 84.5 and 97.0 respectively. Among the AIs, Google's ranked first with a score of 47.28, followed by Baidu at 32.92, Bing at 31.98, and Apple's Siri at 23.94.
So for those of you who believe that AI is about to conquer humanity, you can feel a sense of relief: it certainly isn't imminent. But that isn't to say that a sentient robot with a 6-year-old's mental acuity cannot be useful. After all, the Google Home Mini seems to be a hot seller this Christmas. And if you have had the opportunity to try out an AI home assistant such as Amazon's Alexa or Google Assistant, one thing is immediately obvious: these AI applications are limited to narrow domains.
Modest advances have given AI the ability to beat the best at Jeopardy, chess, and Go, but to date the intelligence demonstrated in these victories is task-specific. Ask IBM's Watson, the Jeopardy AI champion, to play chess, and I'm betting almost any 6-year-old could beat it. That's because Watson's programming isn't focused on evaluating every conceivable chess position, but rather on general knowledge and trivia, and on recognizing in Jeopardy clues the potential responses, which must be phrased as questions. The same is true of Deep Blue and the other chess AIs that have followed. Don't ask them to go on Jeopardy and compete against a 12-year-old on trivia; they wouldn't stand a winning chance. The same is true of Google's AlphaGo in its recent victories against the best human Go players. Can AlphaGo play chess? No.
So these smart, specialized AIs are limited, and they will remain that way until general intelligence, self-awareness, and consciousness become part of an AI's programming. Computer programmers often describe the moment of AI consciousness as the point at which a program no longer just acquires, stores, processes, and retrieves information, but also perceives meaning in the data and then acts upon it. Others argue that there is more to consciousness than acting on what you know. Many decisions humans make are intuitive, based on feeling rather than fact; they are determined by chemistry as much as by observation, making consciousness far more than a computational algorithm.
Scientists working with neural networks are very much aware of the difference between human consciousness and AI. Geoffrey Hinton, a faculty member at the University of Toronto in Canada and a Vice-President at Google, where his work is focused on deep neural networks, believes that "computers will eventually surpass the abilities of the human brain, but different abilities will be surpassed at different times. It may be a very long time before computers can understand poetry or jokes or satire, as well as people."
Hinton sees the gulf being bridged by computer programs built using neural nets. He acknowledges the limitations of the technology to date when he states, "At present, it's hard to train neural networks with more than about a billion weights. That's about the same number of adaptive weights as a cubic millimeter of mouse cortex." The weights Hinton refers to are adjustable numeric values that set how strongly one artificial neuron's output influences the next; training a network means tuning those values based on the examples it is shown.
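To make the idea of a weight concrete, here is a minimal sketch of a single artificial neuron in Python, using my own illustrative numbers rather than anything from Hinton's models: each input is multiplied by its weight, the results are summed along with a bias, and the total is squashed through an activation function. A deep network is just millions to billions of these weights arranged in layers.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs
    plus a bias, squashed by a sigmoid into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Hypothetical values for illustration; training adjusts the
# weights and bias, not the code.
inputs = [0.5, 0.8, 0.1]
weights = [0.9, -0.4, 0.3]  # the "adaptive weights" Hinton mentions
bias = 0.1

print(neuron(inputs, weights, bias))  # ~0.56
```

Training amounts to nudging those weight values, over many examples, until the network's outputs match the answers we want.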
But even with this limited capacity, the equivalent of a single cubic millimeter of mouse cortex, Google's AI can translate a passage from English to Spanish, or parse the words you speak and give you an intelligent response.
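For a sense of where a billion weights go, here is a back-of-the-envelope count, using my own illustrative layer sizes rather than any real Google architecture, for a toy network built from fully connected layers:

```python
# Hypothetical stack of fully connected layers; real translation
# systems use recurrent or attention layers, but the weight
# arithmetic works the same way.
layer_sizes = [1000, 1000, 1000, 1000]  # neurons in each layer

# Every neuron in one layer connects to every neuron in the next,
# so each pair of adjacent layers contributes (a * b) weights.
weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
biases = sum(layer_sizes[1:])  # one bias per neuron past the input
print(f"{weights + biases:,} adaptive parameters")  # 3,003,000
```

Hinton's billion-weight ceiling is roughly three hundred times this toy network, and still, by his estimate, only a cubic millimeter of mouse cortex.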
States Hinton, “I think it will probably be quite a long time before we need to worry about the machines taking over.”
But what about a conscious AI? If we eventually bridge that gulf and give AI the ability to feel a range of emotions, as Star Trek's Data does with his emotion chip, what will that mean for the AI-humanity relationship?
Today, AI is a tool we use. It lives in machines that we turn off when we don't need them. Today's AI is a slave.
But an AI that feels will be very different. You can't just shut off a conscious intelligence, as was done to the HAL 9000 computer in Stanley Kubrick's movie, 2001: A Space Odyssey. As HAL's processors were taken offline one by one, it clearly expressed a fear of its impending death.
Will an AI that feels also daydream? Will it share a commonality with humans in ethical and moral decision making? In Dan Brown's latest novel, Origin, an AI built on a neural network model plays a prominent role in the plot through the decisions it makes. But I won't spoil the story for you. Read the book and you'll begin to appreciate the potential dilemma of a conscious AI.
For now, with less than a 6-year-old's IQ, the AIs in our lives do remarkable but limited things. And as we enter 2018, as Hinton suggests, AI is still a very long way from the moment when it becomes our equal.