Why LLMs Can't Escape the Pattern-Matching Prison if They Don't Learn Recursive Compression
1  School of Biomedical Engineering and Imaging Sciences and King's Institute for Artificial Intelligence, King's College London
Academic Editor: Marcin Schroeder

Abstract:

In this talk we will introduce and discuss SuperARC, a newly proposed open-ended test based on algorithmic probability that critically evaluates claims of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), challenging standard metrics grounded in statistical compression and human-centric tests. By leveraging Kolmogorov complexity rather than Shannon entropy, the test measures fundamental aspects of intelligence such as synthesis, abstraction, and planning or prediction. Comparing state-of-the-art Large Language Models (LLMs) against a hybrid neurosymbolic approach, we identify inherent limitations of LLMs: their performance is incremental and fragile, and they are optimised primarily for human-language imitation rather than genuine model convergence. These results prompt a philosophical reconsideration of how intelligence, both artificial and natural, is conceptualised and assessed.
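
The distinction between statistical and algorithmic compression invoked above rests on standard definitions from algorithmic information theory, stated here for context (they are general definitions, not the SuperARC scoring rule itself). Shannon entropy measures average statistical compressibility with respect to a probability distribution, whereas Kolmogorov complexity and algorithmic probability measure the length of the shortest program that generates an object on a universal machine U:

H(X) = -\sum_{x} P(x)\,\log_2 P(x)

K(x) = \min\{\, |p| : U(p) = x \,\}

m(x) = \sum_{p\,:\,U(p)=x} 2^{-|p|}, \qquad K(x) = -\log_2 m(x) + O(1)

The last identity, the algorithmic coding theorem, is what allows algorithmic probability to stand in for Kolmogorov complexity, which is uncomputable and must be approximated in practice.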
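
As a minimal illustration of why statistical compression can miss algorithmic structure (this is not the SuperARC test; the generator and function names below are purely illustrative), the following Python sketch produces a sequence from a few lines of code. An entropy/dictionary compressor such as zlib finds almost nothing to exploit, yet the shortest description we actually hold, the generating rule plus one seed, is only a few hundred characters long.

import zlib

def lcg_bytes(seed: int, n: int) -> bytes:
    """Produce n bytes from a small linear congruential generator.
    The entire stream is fixed by this short rule plus one seed,
    so its algorithmic (program-length) description is tiny."""
    state = seed
    out = bytearray()
    for _ in range(n):
        state = (1103515245 * state + 12345) % (2 ** 31)
        out.append((state >> 16) & 0xFF)  # keep the better-mixed high bits
    return bytes(out)

data = lcg_bytes(seed=2025, n=10_000)

# Statistical view: zlib finds essentially no redundancy, so the "compressed"
# stream is about as long as (or slightly longer than) the raw data.
print(len(data), len(zlib.compress(data, 9)))

# Algorithmic view: the shortest description we hold is the call below plus
# the short function body above, i.e. a few hundred characters in total.
print(len("lcg_bytes(seed=2025, n=10_000)"))

A test built on algorithmic rather than statistical compression therefore rewards a system that recovers the generating rule (model convergence) over one that merely reproduces surface statistics.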

Keywords: Abstraction and Reasoning Corpus (ARC), Artificial General Intelligence, prediction, compression, program synthesis, inverse problems, symbolic regression, comprehension, Superintelligence, Generative AI, symbolic computation, hybrid computation, Neurosymbolic
