Is Deep Learning capable of solving the AI puzzle?
Deep Learning is the buzzword of the moment. But can it revolutionize artificial intelligence, as some suggest? There is good reason to be excited about deep learning, a sophisticated “machine learning” algorithm that far exceeds many of its predecessors in its ability to recognize syllables, images, speech, and video.
Deep Learning systems are breaking benchmark records at an amazing pace, and their feats never cease to amaze:
- They can read images (extracting labels, detecting objects, and even performing facial recognition with accuracy as good as or better than humans)
- They can recognize human speech and perform automatic translation (Skype relies heavily on this)
- They can even learn how to cook just by “watching” hundreds of YouTube videos
Are these neural networks the missing link to understanding the brain and creating real Artificial Intelligence (AI)? There’s good reason to be skeptical. While advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking, deep learning takes us, at best, only a small step toward the creation of truly intelligent machines. Deep learning is important work, with immediate practical applications. But it’s not as breathtaking as some suggest.
The technology has its roots in a tradition of “neural networks” that goes back to the eighties. Geoff Hinton helped build more complex networks of virtual neurons that were able to circumvent some limitations of the earlier Perceptron: he included a “hidden layer” of neurons that allowed a new generation of networks to learn more complicated functions (like the exclusive-or). Even the new models had serious problems, though. They learned slowly and inefficiently, and couldn’t master even some of the basic things that children do, like learning the past tense of regular verbs. By the late nineteen-nineties, neural networks had again begun to fall out of favor.
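To make the hidden-layer point concrete: exclusive-or is not linearly separable, so a single-layer Perceptron cannot learn it, but a network with one hidden layer can. Here is a minimal NumPy sketch of such a network trained with backpropagation (my own toy illustration, not Hinton’s code; the layer sizes and learning rate are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: not linearly separable, so a single-layer
# Perceptron cannot represent it -- a hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, one sigmoid output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)     # hidden activations
    out = sigmoid(h @ W2 + b2)   # predictions

    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```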
Hinton made an important advance in 2006, with a new technique that he dubbed deep learning. The algorithm is trained on a large set of data and asked, on its own, to classify elements of that data into categories, a bit like a child who is asked to sort a set of toys with no specific instructions. The child might sort them by color, by shape, by function, or by something else. Systems do this on a grand scale, seeing, for example, millions of handwritten digits, and making guesses about which digits look most like one another, “clustering” them together based on similarity. Deep learning’s important innovation is to have models learn categories incrementally, attempting to nail down lower-level categories (like letters) before attempting to acquire higher-level categories (like words). Google used this technique in Google Maps, to automatically read street numbers from millions of streets around the world with astonishing accuracy.
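The flavor of that label-free sorting can be sketched with plain k-means clustering on scikit-learn’s small handwritten-digit set. This is far simpler than the layer-wise pretraining deep learning actually uses, but it shows the idea of grouping digits purely by similarity, with no instructions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

# 1,797 8x8 images of handwritten digits; the labels are ignored
# during clustering and used only to inspect the result.
digits = load_digits()

# Group the images into 10 clusters purely by pixel similarity,
# with no instruction about what a "3" or a "7" is.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
clusters = kmeans.fit_predict(digits.data)

# Inspect how well the unsupervised clusters line up with true digits.
for c in range(10):
    members = digits.target[clusters == c]
    print(f"cluster {c}: most common digit = {np.bincount(members).argmax()}")
```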
Deep learning excels at this sort of problem, known as unsupervised learning. In some cases it performs far better than its predecessors. It can, for example, learn to identify syllables in a new language better than earlier systems. But it’s still not good enough to reliably recognize or sort objects when the set of possibilities is large. The much-publicized Google system that learned to recognize cats, for example, works about seventy per cent better than its predecessors. But it still recognizes less than a sixth of the objects on which it was trained, and it did worse when the objects were rotated or moved to the left or right of an image.
Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.
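For contrast, the kind of explicit causal and deductive knowledge described above is trivial to express in classical symbolic form. A toy sketch, with invented disease and symptom names:

```python
# Hypothetical rule base linking diseases to their symptoms;
# the names are invented purely for illustration.
rules = {
    "flu": {"fever", "cough"},
    "measles": {"fever", "rash"},
}

observed = {"fever", "rash"}

# A simple deductive step -- exactly the sort of explicit inference
# deep networks have no obvious way to perform: keep the diseases
# whose symptoms are all present in the observations.
candidates = [d for d, symptoms in rules.items() if symptoms <= observed]
print(candidates)  # ['measles']
```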
The most powerful A.I. systems, like Watson, the machine that beat humans in “Jeopardy,” use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.
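To give a taste of the statistical end of such an ensemble, here is the textbook Bayes’-rule calculation that Bayesian inference builds on, with made-up numbers (a generic illustration, not how Watson is actually implemented):

```python
# Bayes' rule: P(disease | symptom)
#   = P(symptom | disease) * P(disease) / P(symptom)
# All probabilities below are invented for illustration.
p_disease = 0.01                 # prior: 1% of people have the disease
p_symptom_given_disease = 0.90   # likelihood
p_symptom_given_healthy = 0.05   # false-positive rate

# Total probability of observing the symptom at all.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

posterior = p_symptom_given_disease * p_disease / p_symptom
print(f"P(disease | symptom) = {posterior:.3f}")  # ~0.154
```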
Norvig, Google’s director of research, wrote a brilliant review of the previous work on getting machines to understand stories, and fully endorsed an approach that built on classical “symbol-manipulation” techniques. Norvig’s group is now working with Hinton, and Norvig is clearly very interested in seeing what Hinton can come up with. But even Norvig didn’t see how you could build a machine that could understand stories using deep learning alone.
In my opinion, the greatest challenge for AI is to go beyond simple pattern recognition and extract meaning from data. It’s not that machines can’t learn meaning and genuinely distinguish a piece of music from a text. But that will never be achieved if we keep training them as, well… machines. But that is another post.
To paraphrase an old parable, Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.