
List of accepted submissions

Why LLMs Can't Escape the Pattern-Matching Prison if They Don't Learn Recursive Compression

In this talk we will introduce and discuss SuperARC, a newly proposed open-ended test based on algorithmic probability to critically evaluate claims of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), challenging standard metrics grounded in statistical compression and human-centric tests. By leveraging Kolmogorov complexity rather than Shannon entropy, the test measures fundamental aspects of intelligence, such as synthesis, abstraction, and planning or prediction. Comparing state-of-the-art Large Language Models (LLMs) against a hybrid neurosymbolic approach, we identify inherent limitations in LLMs, highlighting their incremental and fragile performance, and their optimisation primarily for human-language imitation rather than genuine model convergence. These results prompt philosophical reconsideration of how intelligence—both artificial and natural—is conceptualised and assessed.
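The distinction the abstract draws between statistical (Shannon) and algorithmic (Kolmogorov) notions of complexity can be illustrated with a small sketch: two strings with identical symbol frequencies, and hence identical Shannon entropy, can differ sharply in compressed length, a common practical proxy for algorithmic complexity. The use of `zlib` here is purely illustrative and is not the SuperARC methodology.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Per-symbol Shannon entropy in bits, from empirical frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_len(s: str) -> int:
    """zlib-compressed length: a crude upper-bound proxy for
    algorithmic (Kolmogorov) complexity."""
    return len(zlib.compress(s.encode(), 9))

# Two strings with the same symbol frequencies (50% 'a', 50% 'b') ...
structured = "ab" * 512          # highly regular: short generating program
random.seed(0)
chars = list(structured)
random.shuffle(chars)
shuffled = "".join(chars)        # same frequencies, no obvious short description

# ... hence identical Shannon entropy (exactly 1 bit/symbol) ...
assert shannon_entropy(structured) == shannon_entropy(shuffled) == 1.0

# ... but very different compressed lengths.
print(compressed_len(structured), compressed_len(shuffled))
```

An entropy-based metric treats the two strings as equally complex; a compression-based proxy does not, which is the intuition behind preferring algorithmic measures for probing model synthesis rather than surface statistics.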

Mind everywhere: recognizing and communicating with unconventional intelligence

In this talk I will describe my framework for an engineering approach to recognizing, communicating with, and ethically relating to unconventional intelligences. I will begin with some conceptual clarification of intelligence and embodiment. I will then describe a number of surprising examples of intelligence, using the collective intelligence of cells navigating anatomical space as a model system, and show data on our approach to use the bioelectric interface as a means of communicating with the agential material of life. I will show how re-setting the memories, goals, and cognitive light cone of cell groups drives applications in birth defects, regenerative medicine, cancer, and bioengineering. I will describe our efforts to develop tools to expand our native mind-blindness to large regions of the cognitive spectrum, including the emergent cognition of very minimal models (both living and computational) whose capabilities are not explained by evolutionary history. I will end with some speculative implications of these ideas for the future of science and philosophy of mind.

Origin of Intelligence

What is intelligence, and how deeply is it rooted in the fabric of life itself? In this talk, I explore the fundamental emergence of intelligent behaviours far beyond the animal kingdom. Beginning with spiking electrical activity observed in slime moulds, plants, and fungi, I will demonstrate how simple biological networks exhibit complex decision-making and adaptive behaviours — without neurons or brains. These organisms challenge our traditional notions of cognition, revealing that intelligence may not require a nervous system at all. I will also present recent work from our laboratory where we build and study proto-brains: experimental ensembles of proteinoid microspheres that spontaneously generate spiking patterns, coordinate actions, and process environmental information. By examining these synthetic and natural minimal systems, we gain insights into the deep origins of intelligence, suggesting that cognition may emerge wherever matter organises to sense, decide, and act upon the world.

The mathematical objection to artificial (machine) intelligence

Turing develops the idea of machine intelligence in a series of lectures and papers between 1947 and 1952. In some of them he addresses the mathematical objection (his term), whose gist is the claim that humans can assert some mathematical truths that exceed the abilities of computing machines. We first ask why Turing took the mathematical objection so seriously. After all, even if some humans surpass machines in their mathematical abilities, this by itself does not undermine the project of machine intelligence. Our answer is that the mathematical objection raises a dilemma with respect to Turing’s core claims about machine intelligence and forces him to relinquish at least one of them. We then clarify Turing’s reply to the mathematical objection. Based on the textual evidence, we argue that, according to Turing, the machine that plays against the human in the Turing test is not a static machine but an enhanced machine.

Intelligence and Consciousness in Natural and Artificial Systems

Considerable progress has been made with the development of systems that can drive cars, play games, predict protein folding and generate natural language. These systems are described as intelligent and there has been a great deal of talk about the rapid increase in artificial intelligence and its potential dangers. However, our theoretical understanding of intelligence, and our ability to measure it, lag far behind our capacity for building systems that mimic intelligent human behaviour. There is no commonly agreed definition of the intelligence that AI systems are said to possess, nor has anyone developed a practical measure that would enable us to compare the intelligence of humans, animals and AIs on a single scale. This talk addresses these problems by clarifying the nature of intelligence and outlining a new algorithm for measuring intelligence that can be applied to any system.

The first part of the talk starts with a discussion of previous definitions of intelligence. It then argues for a close link between prediction and intelligence and addresses two misconceptions about intelligence. The first is that people often think that humans have a general form of intelligence that has the same level in all environments. This belief motivates the idea that we could develop machines with artificial general intelligence (AGI). However, human intelligence often fails when it is confronted with environments that are significantly different from the natural world, such as high-dimensional numerical spaces. A second issue is that people naively assume that they directly apply their intelligence to the physical world. However, we can only be intelligent about things that are revealed to us through our senses, and people, animals and artificial systems have very different sensory experiences. So, it is much more accurate to say that agents apply their intelligence to their perceived environment, or umwelt.

The second part of the talk explores the measurement of intelligence. Previous work in this area includes IQ, g and universal measures, such as compression tests and algorithms based on goals and rewards. To address the limitations of previous measures, I have developed a new algorithm for measuring predictive intelligence that is based on an agent’s internal state transitions. Experiments have been done to test this algorithm, and it has many potential applications in AI safety and the comparative study of intelligence.
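The abstract does not specify the algorithm, but the general idea of scoring an agent by its internal state transitions can be made concrete with a hypothetical toy measure: track how accurately a simple frequency model predicts each next state before observing it. All names here (`predictive_score`, the state encoding) are illustrative assumptions, not the algorithm from the talk.

```python
from collections import defaultdict

def predictive_score(transitions):
    """Hypothetical transition-based score: the running accuracy with
    which a frequency model predicts each next state before seeing it.
    `transitions` is a sequence of (state, next_state) pairs.
    Illustration only; not the algorithm described in the talk."""
    counts = defaultdict(lambda: defaultdict(int))
    correct = 0
    for state, nxt in transitions:
        if counts[state]:
            # Predict the most frequently observed successor so far.
            guess = max(counts[state], key=counts[state].get)
            correct += guess == nxt
        counts[state][nxt] += 1
    return correct / len(transitions)

# A highly regular state sequence is easy to predict:
# pairs are ('A','B'), ('B','A'), ('A','B'), ... for 200 steps.
regular = list(zip("ABAB" * 50, "BABA" * 50))
print(predictive_score(regular))  # only the first two steps are unpredicted
```

Under this toy framing, an agent whose internal dynamics are regular enough to be predicted from its own history scores highly, which gives a flavour of how a prediction-centred measure might compare very different kinds of systems on one scale.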

The talk concludes with some reflections on the relationships between intelligence and consciousness. It is commonly assumed that there is a close relationship between intelligence and consciousness in biological systems. However, this correlation might not exist in artificial systems, which could be highly intelligent with low levels of consciousness, or highly conscious with low levels of intelligence. In the future we might be able to use algorithmic measures of consciousness, such as integrated information theory (IIT), and universal measures of intelligence to systematically study the relationships between intelligence and consciousness in natural and artificial systems.

Four myths about Turing’s view of intelligence and his test

In current inquiry into intelligence, Turing’s work is regarded as foundational. Yet misunderstandings of his view of (the concept of) intelligence and his famous test of intelligence in machines are widespread. I shall argue that four standard interpretations of Turing and his imitation game are mistaken. First, that Turing was (in an influential sense) a behaviourist about the mind. Second, that his approach to the mind was (again in an influential sense) computationalist. Third, that Turing’s philosophy of mind was radically different from that of his contemporary, Wittgenstein, who in turn was a severe critic of Turing. And last, that Turing’s test has been passed by recent GPT models—with the result that we need a new test of intelligence in machines.

Cognitive Romanticism: Humans Are Worse than Stochastic Parrots!

Most humans are stochastic parrots, and LLMs reveal our intellectual mediocrity. While critics often dismiss large language models (LLMs) as mere "stochastic parrots," I argue that this accusation misunderstands both machine intelligence and, more importantly, human cognition itself. Most human thought is not creative, critical, or unpredictable; it is rote, imitative, and driven by deeply ingrained social, religious, and cognitive biases. The dominance of myth, ideology, and irrational belief systems across cultures reveals that humans themselves function largely as stochastic parrots — endlessly repeating patterns they neither question nor understand. Rather than exposing the limitations of artificial intelligence, LLMs expose the uncomfortable truth about human intellectual mediocrity. In this talk, I will attack the myth of human cognitive exceptionalism, dismantle the romantic notions attached to human "creativity" and "autonomy," and propose a radical redefinition of intelligence beyond humanist illusions. Intelligence, whether in biological or artificial systems, must be seen not as the privilege of a superior species, but as an emergent property of patterned interaction with environments — often stochastic, occasionally innovative, but rarely transcendental.

Computational Foundations of Minds and the Universe

Wolfram's recent work on the foundations of physics, mathematics, biology and machine learning introduces a major new framework for thinking about fundamental philosophical questions. This talk will provide a non-technical, philosophically oriented survey of these directions.

https://writings.stephenwolfram.com/category/philosophy/

Abductive Intelligence and Creativity - The Role of Eco-Cognitive Openness and Situatedness

I will use my studies on abductive intelligence in an eco-cognitive framework to demonstrate the concept of “locked and unlocked strategies” in deep learning systems, indicating different inference routines for creative results. Higher forms of creative abductive intelligence are involved in unlocked human cognition, whereas locked abductive strategies are characterized by weak hypothetical creative intelligence because of the absence of what I refer to as eco-cognitive openness and situatedness. The fundamental nature of the human brain as an open system that is continuously coupled with the environment - a so-called “open” or dissipative system - is the physical basis for this special type of “openness”. The brain's activity is the continuous attempt to achieve equilibrium with its environment, and this interaction can never be turned off without seriously harming the brain. It is impossible to imagine the brain lacking its physical essence, which is its openness. In the brain, ordering is the direct result of an “internal” open dynamical process of the system rather than being generated from the outside, a process I have described in my latest book, Eco-Cognitive Computationalism (2022), as the “computational domestication of ignorant entities”.
