The goal of Artificial General Intelligence (AGI) is to create computer systems with human-like intelligence. AGI systems should be able to reason and learn from experience by interacting with the environment in a mostly unsupervised way. To build intelligent machines, it is necessary to use developmental methods, in which a system develops autonomously through its interaction with the environment.
The goal is to design cognitive architectures that can autonomously develop higher cognitive abilities. However, most research so far has focused on sensory-motor development (image, voice, and text recognition, robot coordination, etc.) and has largely ignored higher cognitive functions such as reasoning.
“Three notable hallmarks of intelligent cognition are the ability to draw rational conclusions, the ability to make plausible assumptions and the ability to generalise from experience. In a logical setting, these abilities correspond to the processes of deduction, abduction, and induction, respectively.”
These problems have been studied thoroughly in traditional AI with symbolic methods (such as automatic theorem proving), sub-symbolic methods (such as artificial neural networks – ANNs), probabilistic methods (such as Bayesian networks), etc. These approaches typically cover only a proper subset of the above-mentioned types of reasoning. For instance, the symbolic approach is mainly concerned with deductive reasoning and the sub-symbolic approach with inductive reasoning.
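To make the three reasoning modes concrete, here is a minimal sketch in the logical setting the quote describes, using Peirce's classic beans-from-a-bag example. All function and variable names are my own illustrative assumptions, not part of any existing system:

```python
# Toy illustration of deduction, induction, and abduction over one rule
# ("all beans from this bag are white"), one case, and one result.

rule = ("bean_from_bag", "white")    # if X is a bean from the bag, X is white
case = ("b1", "bean_from_bag")       # b1 is a bean from the bag
result = ("b1", "white")             # b1 is white

def deduce(rule, case):
    """Deduction: rule + case -> result (a rational conclusion)."""
    antecedent, consequent = rule
    item, prop = case
    if prop == antecedent:
        return (item, consequent)

def induce(case, result):
    """Induction: case + result -> tentative rule (a generalisation)."""
    item_c, prop_c = case
    item_r, prop_r = result
    if item_c == item_r:
        return (prop_c, prop_r)      # generalise: whatever has prop_c has prop_r

def abduce(rule, result):
    """Abduction: rule + result -> plausible case (a plausible assumption)."""
    antecedent, consequent = rule
    item, prop = result
    if prop == consequent:
        return (item, antecedent)    # b1 *might* be a bean from the bag

print(deduce(rule, case))    # ('b1', 'white')
print(induce(case, result))  # ('bean_from_bag', 'white')
print(abduce(rule, result))  # ('b1', 'bean_from_bag')
```

Note that only deduction is truth-preserving; induction and abduction return defeasible hypotheses, which is why approaches covering only one of the three modes fall short.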
A hybrid approach that integrates symbolic and sub-symbolic methods – such as neural-symbolic systems – may be a plausible avenue. Hybrid approaches, however, tend to be limited by the difficulty of designing interfaces for complex interaction between the different subsystems. Human reasoning processes, by contrast, seem to be tightly integrated with concept formation: new concepts are created continuously, become integrated with previous knowledge, and take part in new reasoning processes.
From developmental psychology, evidence is accumulating that infants and children use similarity-based measures to categorize objects and form new concepts. AGI could benefit from a developmental system which integrates concept formation, deduction, induction, and abduction. How do we build such a system?
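The similarity-based concept formation mentioned above can be sketched very simply. The following is a toy prototype model of my own devising (the threshold and update rule are assumptions, not a specific model from the literature): an object joins the most similar existing concept if the similarity is high enough, and otherwise founds a new one:

```python
# Sketch of similarity-based categorization: objects are feature vectors,
# concepts are prototype vectors. The 0.9 threshold is an arbitrary assumption.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def categorize(obj, prototypes, threshold=0.9):
    """Return the index of the matching concept, creating one if needed."""
    best_i, best_sim = None, -1.0
    for i, proto in enumerate(prototypes):
        sim = cosine(obj, proto)
        if sim > best_sim:
            best_i, best_sim = i, sim
    if best_sim >= threshold:
        # pull the prototype toward the new instance (simple averaging)
        prototypes[best_i] = [(p + o) / 2 for p, o in zip(prototypes[best_i], obj)]
        return best_i
    prototypes.append(list(obj))     # a new concept is formed
    return len(prototypes) - 1

prototypes = []
print(categorize([1.0, 0.0], prototypes))  # 0 -- first concept created
print(categorize([0.9, 0.1], prototypes))  # 0 -- similar enough, same concept
print(categorize([0.0, 1.0], prototypes))  # 1 -- dissimilar, new concept
```

The point of the sketch is that concepts emerge from experience rather than being supplied in advance, which is exactly the ingredient a developmental system would need to couple with the deductive, inductive, and abductive machinery.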
For me the challenge is simple: we need a new neural-network architecture that allows symbolic representations to evolve. Deep neural networks (DNNs) already extract abstract features from images (rotations, shapes), but these abstractions are not explicitly represented in the networks' knowledge structures. Why? Because DNNs were designed to extract these abstractions, NOT to manipulate them.
I think the complexity of the problem is only apparent. Why not use a different learning paradigm: instead of feeding the machine our symbols, we create space for the machine to evolve and use the symbols and representations it obtains from the environment? Of course, this will not solve the problem of transparency (the knowledge will be as opaque as it is in modern neural networks), but why should it? Perhaps the correspondence between our symbolic representation and the machine's symbolic representation of the same objects will be impossible to extract. But that's fine, as long as we abandon the obsession with controlling and understanding what happens under the hood. After all, we can teach a child without a clue of how its brain works, can't we?