Why LLMs Can't Escape the Pattern-Matching Prison if They Don't Learn Recursive Compression

In this talk we will introduce and discuss SuperARC, a newly proposed open-ended test based on algorithmic probability that critically evaluates claims of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), challenging standard metrics grounded in statistical compression and human-centric tests. By leveraging Kolmogorov complexity rather than Shannon entropy, the test measures fundamental aspects of intelligence such as synthesis, abstraction, and planning or prediction. Comparing state-of-the-art Large Language Models (LLMs) against a hybrid neurosymbolic approach, we identify inherent limitations in LLMs, highlighting their incremental and fragile performance and their optimisation primarily for human-language imitation rather than genuine model convergence. These results prompt philosophical reconsideration of how intelligence—both artificial and natural—is conceptualised and assessed.
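The distinction between statistical (Shannon-style) compression and algorithmic (Kolmogorov-style) complexity can be made concrete with a small illustration. The sketch below is not the SuperARC test itself; it only shows, under the common assumption that a general-purpose compressor like zlib gives a crude computable upper bound on Kolmogorov complexity, how a Lempel–Ziv compressor exploits literal repetition well but handles a sequence generated by an equally short program (a simple counter) far less efficiently:

```python
import zlib

def compressed_len(s: str) -> int:
    # Length in bytes of the zlib-compressed string: a crude,
    # computable upper bound on its Kolmogorov complexity.
    return len(zlib.compress(s.encode("utf-8"), 9))

# Highly repetitive sequence: generated by the tiny program "repeat '01'".
regular = "01" * 500  # 1000 characters

# Also generated by a tiny program (concatenated binary counter values),
# but with little literal repetition for a statistical compressor to exploit.
counter = "".join(format(i, "b") for i in range(200))

# Both sequences have low algorithmic complexity (short generating programs),
# yet a statistical compressor rates the second one as far more "complex".
print(compressed_len(regular), len(regular))
print(compressed_len(counter), len(counter))
```

Both strings are produced by one-line programs, so their true algorithmic complexity is comparably small, yet the statistical compressor assigns the counter sequence a much larger size: this gap between statistical compressibility and program-size complexity is the kind of failure mode the Kolmogorov-based framing is meant to expose.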
