Considerable progress has been made in developing systems that can drive cars, play games, predict protein folding and generate natural language. These systems are described as intelligent, and there has been a great deal of talk about the rapid advance of artificial intelligence and its potential dangers. However, our theoretical understanding of intelligence, and our ability to measure it, lag far behind our capacity for building systems that mimic intelligent human behaviour. There is no commonly agreed definition of the intelligence that AI systems are said to possess, nor has anyone developed a practical measure that would enable us to compare the intelligence of humans, animals and AIs on a single scale. This talk addresses these problems by clarifying the nature of intelligence and outlining a new algorithm for measuring intelligence that can be applied to any system.
The first part of the talk starts with a discussion of previous definitions of intelligence. It then argues for a close link between prediction and intelligence and addresses two misconceptions about intelligence. The first is the belief that humans have a general form of intelligence that operates at the same level in all environments. This belief motivates the idea that we could develop machines with artificial general intelligence (AGI). However, human intelligence often fails when it is confronted with environments that are significantly different from the natural world, such as high-dimensional numerical spaces. The second misconception is the naive assumption that we apply our intelligence directly to the physical world. In fact, we can only be intelligent about things that are revealed to us through our senses, and people, animals and artificial systems have very different sensory experiences. So it is much more accurate to say that agents apply their intelligence to their perceived environment, or umwelt.
The second part of the talk explores the measurement of intelligence. Previous work in this area includes IQ, g and universal measures, such as compression tests and algorithms based on goals and rewards. To address the limitations of these measures, I have developed a new algorithm for measuring predictive intelligence that is based on an agent's internal state transitions. Experiments have been carried out to test this algorithm, and it has many potential applications in AI safety and the comparative study of intelligence.
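To give a concrete flavour of what a state-transition measure might look like, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the algorithm presented in the talk: the hypothetical predictive_score function fits a simple most-frequent-successor model to a recorded trace of (state, next state) pairs and reports prediction accuracy on held-out transitions, and the toy agent is a noisy four-state cycle.

```python
import random
from collections import defaultdict, Counter

def predictive_score(transitions, train_frac=0.8):
    """Score how predictable an agent's internal state transitions are.

    `transitions` is a recorded trace of (state, next_state) pairs.
    A frequency model is fitted on the first part of the trace and
    scored by prediction accuracy on the rest. (An illustrative
    stand-in, not the measure described in the talk.)
    """
    split = int(len(transitions) * train_frac)
    train, test = transitions[:split], transitions[split:]

    # Most-frequent-successor model: for each state, predict the
    # next state that followed it most often during training.
    counts = defaultdict(Counter)
    for state, nxt in train:
        counts[state][nxt] += 1
    model = {s: c.most_common(1)[0][0] for s, c in counts.items()}

    correct = sum(1 for state, nxt in test if model.get(state) == nxt)
    return correct / len(test) if test else 0.0

# Toy agent: a four-state cycle that takes a random jump 10% of the time.
random.seed(0)
states = [0]
for _ in range(2000):
    s = states[-1]
    states.append((s + 1) % 4 if random.random() < 0.9 else random.randrange(4))

trace = list(zip(states, states[1:]))
print(f"predictive score: {predictive_score(trace):.2f}")  # roughly 0.9
```

On this toy measure a perfectly regular agent scores close to 1 and a purely random one close to chance; the point is only that a trace of internal state transitions is enough raw material for a predictive score.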
The talk concludes with some reflections on the relationship between intelligence and consciousness. It is commonly assumed that intelligence and consciousness are closely related in biological systems. However, this correlation might not hold in artificial systems, which could be highly intelligent with low levels of consciousness, or highly conscious with low levels of intelligence. In the future we might be able to use algorithmic measures of consciousness, such as those proposed by integrated information theory (IIT), together with universal measures of intelligence to systematically study the relationship between intelligence and consciousness in natural and artificial systems.
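As a hypothetical illustration of how such a study might be set up, the sketch below pairs a two-unit toy system with a crude whole-minus-parts integration measure. The integration_proxy function is loosely inspired by IIT but is emphatically not IIT's phi; it simply asks how much more predictable the joint state is over time than the two parts taken in isolation.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (in bits) between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def integration_proxy(a, b):
    """Whole-minus-parts temporal mutual information for a two-unit system.

    A crude integration measure, loosely inspired by IIT but NOT phi:
    how much more predictable the joint state is than the parts alone.
    """
    whole = 2 * a + b  # encode the joint state as 0..3
    mi_whole = mutual_info(whole[:-1], whole[1:])
    mi_parts = mutual_info(a[:-1], a[1:]) + mutual_info(b[:-1], b[1:])
    return mi_whole - mi_parts

rng = np.random.default_rng(0)
T = 5000

# Two independent random units: no integration to find.
a = rng.integers(0, 2, T)
b = rng.integers(0, 2, T)

# Two coupled units with slightly noisy XOR/copy dynamics.
c = np.zeros(T, dtype=int)
d = np.zeros(T, dtype=int)
d[0] = 1
for t in range(1, T):
    c[t] = (c[t - 1] ^ d[t - 1]) ^ (rng.random() < 0.05)
    d[t] = c[t - 1] ^ (rng.random() < 0.05)

print(f"independent units: {integration_proxy(a, b):.3f}")  # near zero
print(f"coupled units:     {integration_proxy(c, d):.3f}")  # clearly positive
```

The coupled system scores well above the independent baseline on this proxy, and that kind of contrast, computed alongside a predictive intelligence score, is what a systematic study of the intelligence-consciousness relationship would need to quantify.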