Why LLMs Can't Escape the Pattern-Matching Prison if They Don't Learn Recursive Compression

Published: 03 April 2025 by MDPI in The 1st International Online Conference of the Journal Philosophies, session General Session

Abstract:
In this talk we will introduce and discuss SuperARC, a newly proposed open-ended test based on algorithmic probability, designed to critically evaluate claims of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) and to challenge standard metrics grounded in statistical compression and human-centric tests. By leveraging Kolmogorov complexity rather than Shannon entropy, the test measures fundamental aspects of intelligence such as synthesis, abstraction, and planning or prediction. Comparing state-of-the-art Large Language Models (LLMs) against a hybrid neurosymbolic approach, we identify inherent limitations in LLMs, highlighting their incremental and fragile performance and their optimisation primarily for human-language imitation rather than genuine model convergence. These results prompt a philosophical reconsideration of how intelligence, both artificial and natural, is conceptualised and assessed.
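The distinction the abstract draws between Shannon entropy and Kolmogorov complexity can be illustrated with a standard compression-based approximation. This is a rough sketch of the general idea, not the SuperARC test itself; the function names and the zlib approximation are illustrative choices of mine. The point: a string can be statistically "random" (maximal character-level entropy) while being algorithmically trivial, and only the algorithmic view detects the short generating program.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy_bits(s: str) -> float:
    """Shannon entropy of the character distribution, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_length(s: str) -> int:
    """Crude upper bound on algorithmic (Kolmogorov) complexity:
    the length in bytes of the zlib-compressed string."""
    return len(zlib.compress(s.encode(), level=9))

# Statistically uniform but algorithmically simple: a short program
# ("repeat the digits 0-9") generates the whole string.
algorithmically_simple = "0123456789" * 100

# Comparable length and alphabet, but no obvious short generating
# program (illustrative pseudo-random sample, fixed seed).
random.seed(0)
statistically_similar = "".join(random.choice("0123456789") for _ in range(1000))

# Both strings look equally "complex" to a character-level entropy measure
# (close to log2(10) ≈ 3.32 bits per symbol)...
print(shannon_entropy_bits(algorithmically_simple))
print(shannon_entropy_bits(statistically_similar))

# ...but the compressor finds a far shorter description of the first,
# exposing the difference an entropy-based metric cannot see.
print(compressed_length(algorithmically_simple) < compressed_length(statistically_similar))  # True
```

Note that compressors like zlib only upper-bound Kolmogorov complexity; the talk's argument is that LLM benchmarks built on statistical regularities inherit exactly the blindness of the entropy measure above.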
Keywords: Abstraction and Reasoning Corpus (ARC), Artificial General Intelligence, prediction, compression, program synthesis, inverse problems, symbolic regression, comprehension, Superintelligence, Generative AI, symbolic computation, hybrid computation, Neurosym
