
List of accepted submissions

 
 
 
Rethinking intelligence: the problems of the representational view in the era of LLMs

Large language models (LLMs), such as ChatGPT, seem to exhibit human-level intelligence on a wide range of cognitive tasks. This raises the following questions: is such behavior enough to determine the kind of intelligence current AI systems possess? Is this seemingly intelligent behavior enough to attribute to AI the same kind of intelligence we commonly attribute to humans? Pioneering studies (Bubeck et al., 2023) suggest that systems such as ChatGPT are starting to approach the generality observed in human intelligence. This approach has been characterized as a methodology of “black-box” interpretability. However, “black-box” methodologies are commonly criticized because they ignore the inner functioning of a model, analyzing only the inputs and outputs of a given system, and because they assume, arguably without good reason, that the same kinds of tasks that measure human intelligence might also shed light on LLMs’ intelligence. The argument, inspired by Dretske (1993) and recently used by Grzankowski (2024), about how to find traces of “real intelligence” in such systems and why “black-box” approaches are potentially misleading has the following form:

1) Intelligence depends on a) mental representations of the correct kind (semantic kind) and b) those same relevant mental representations have to be “used” by the system in order to cause behavior.
2) Black-box approaches to LLMs fail to account for a) and b).
3) Therefore, we have no reason to think that there is intelligence in LLMs (at least according to black-box approaches).

The main goal of my paper is to argue against premise 1) by criticizing both a) and b) as conditions for intelligence attribution. I remain largely neutral about 2) and 3).

As a first step, I problematize the concept of mental representation itself as a scientific or natural kind. Consequently, I also show that there are fruitful candidate definitions of intelligence that do not appeal to mental representations at all. More specifically, in the same spirit as arguments proposed by Ramsey (2017), I show how there can be a useful “divorce” between cognitive science and mental representation, specifically when it comes to intelligence. This argument also draws on recent objections to mental representations as scientific kinds (Facchin, 2023) or as natural kinds (Allen, 2017). I show that, in the relevant sciences, different definitions of intelligence have been proposed that are compatible with a more deflationary approach to mental representation. This small survey includes popular characterizations in psychology (Sternberg, 2019; Gardner, 2011), neuroscience (Haier, 2023; Duncan, 2020), and AI itself (Russell, 2016; Brooks, 1995). Provisionally, the argument in this first section has the following form:

1) Intelligence requires mental representations.
2) Mental representation is a scientifically problematic concept.
3) Therefore, intelligence is a scientifically problematic concept. (But actual scientific views do not seem to require mental representations in any case.)

As a second step, I use the distinction between vehicle and content to problematize the causal efficacy of mental content, and I argue, borrowing from Egan (2020), that contents and their causal/explanatory role are pragmatically but not metaphysically constrained. Additionally, I argue that, even assuming that representations in LLMs have some structural resemblance to their targets, which makes them good candidates for non-arbitrary content determination, they still suffer from potential causal inefficacy (Nirshberg, 2023) and should therefore be considered mere epiphenomena (Baysan, 2021). Following this line of thought, I argue that even scientific methods such as probing LLM representations rely on pragmatic attributions, as has been argued in neuroscience regarding the use of probes for decoding mental representations (Cao, 2022). Such a caveat might allow us to treat mental representation as an important explanatory concept for AI’s behavior without engaging with the metaphysical problems the concept commonly implies, particularly regarding its causal efficacy. Consequently, a useful concept of intelligence should drop the causal requirement on contents without fully trivializing the role of representations in behavioral explanations. This second argument has roughly the following structure:
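To make the method of probing representations concrete for readers unfamiliar with it: a probe is typically a simple classifier trained to decode some property of interest from a model’s internal activations, and above-chance decoding accuracy is then interpreted as evidence that the model “represents” that property. Below is a minimal, hypothetical Python sketch on toy random data; the variable names and the decoded property are illustrative and are not drawn from Cao (2022) or from any actual LLM.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for hidden activations (n_samples x hidden_dim) and labels for
# some property of the inputs (e.g., a syntactic or semantic feature).
hidden_states = rng.normal(size=(1000, 64))
labels = (hidden_states[:, :8].sum(axis=1) > 0).astype(int)  # toy "encoded" property

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, test_size=0.2, random_state=0
)

# The probe: a linear classifier trained to decode the property from the vehicle.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")

Even a successful probe of this kind shows only that the property is decodable from the vehicle; whether that licenses attributing causally efficacious content to the system is precisely the pragmatic question raised above.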

1) Intelligence requires mental contents to be causal explanations of behavior.
2) Mental contents are not causal explanations of behavior.
3) Therefore, intelligence is an incoherent concept. (Not even applicable to humans.)

Finally, I present some speculative considerations about a more deflationary concept of intelligence that may have the advantage of being scientifically productive while avoiding severe metaphysical problems. The general conclusion of my paper is two-sided: it has a negative and a positive part. On the negative side, I claim that, at least until we have a more robust account of mental representation in science and philosophy, we should think of intelligence without necessarily requiring mental representation or its causal powers. On the positive side, we can evaluate intelligence in terms of more operational criteria, for example behavioral success, mechanistic complexity, and learning conditions, allowing us to take seriously the intelligence we observe in generative systems. Such a conclusion does not imply that generative systems are categorically intelligent, or even more intelligent than humans just because they behave as such on certain cognitive tasks. The fact that their mechanistic complexity is limited and less efficient than that of the human brain, together with the fact that such systems require vastly more training instances and are less flexible learners than human children, is sufficient reason to regard them, in very relevant respects, as less intelligent than humans. Nevertheless, LLMs invite us to rethink a concept that, despite its vagueness, may become more scientifically productive by excluding unjustified anthropocentric requirements, and may gain better scientific grounding by being operationally applicable to a wide range of systems, from simple and biological to complex and artificial ones.

References
Allen, C. (2017). On (not) defining cognition. Synthese, 194(11), 4233–4249. https://doi.org/10.1007/s11229-017-1454-4
Baysan, U. (2021). Rejecting epiphobia. Synthese, 199(1–2), 2773–2791. https://doi.org/10.1007/s11229-020-02911-w
Brooks, R. A. (1995). Intelligence without reason. In L. Steels & R. A. Brooks (Eds.), The artificial life route to artificial intelligence: Building embodied, situated agents (pp. 25–81). Lawrence Erlbaum Associates, Inc.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. http://arxiv.org/abs/2303.12712
Cao, R. (2022). Putting representations to use. Synthese, 200(2). https://doi.org/10.1007/s11229-022-03522-3
Dretske, F. (1993). Can intelligence be artificial? Philosophical Studies, 71(2).
Duncan, J., Assem, M., & Shashidhara, S. (2020). Integrated intelligence from distributed brain activity. Trends in Cognitive Sciences, 24(10), 838–852. https://doi.org/10.1016/j.tics.2020.06.012
Egan, F. (2020). A deflationary account of mental representation. In J. Smortchkova, K. Dołęga, & T. Schlicht (Eds.), What Are Mental Representations? Oxford University Press.
Facchin, M. (2023). Why can’t we say what cognition is (at least for the time being). Philosophy and the Mind Sciences, 4. https://doi.org/10.33735/phimisci.2023.9664
Gardner, H. (2011). Frames of mind: the theory of multiple intelligences. Basic Books.
Grzankowski, A. (2024). Real sparks of artificial intelligence and the importance of inner interpretability. Inquiry, 1–27. https://doi.org/10.1080/0020174X.2023.2296468
Haier, R. J. (2023). The neuroscience of intelligence. Cambridge University Press.
Li, K., Hopkins, A. K., Bau, D., Viégas, F., Pfister, H., & Wattenberg, M. (2022). Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv preprint arXiv:2210.13382.
Nirshberg, G. (2023). Structural Resemblance and the Causal Role of Content. Erkenntnis. https://doi.org/10.1007/s10670-023-00699-y
Ramsey, W. (2017). Must cognition be representational? Synthese, 194(11), 4197–4214. https://doi.org/10.1007/s11229-014-0644-6
Russell, S. (2016). Rationality and intelligence: A brief update. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 7–28). Springer.
Sternberg, R. J. (2019). The Augmented Theory of Successful Intelligence. In The Cambridge Handbook of Intelligence. Cambridge University Press. https://doi.org/10.1017/9781108770422

Intelligence as a second-order virtue: changing attitudes for successful interactions in digital environments.

From an instrumental perspective, intelligence is understood as a problem-solving ability or as error avoidance (Holm-Hadulla et al. 2022). It is used to fulfil goals through the design and selection of good strategies or attitudes, and its success can be understood in terms of each person's interests and expectations. When it comes to inquiry, however, the works of authors like Kruglanski and Boyantki (in Matheson and Vitz, 2014) and Tanesini (2021), among others, generally share two main goals: social recognition (direct, or indirect through other kinds of non-epistemic achievements) and truth-conduciveness (which includes avoiding falsehoods, not only finding truths).

Intelligence makes use of our perspective (understood as a set of beliefs) and of our epistemic evaluations in choosing strategies, in the expectation that those strategies will match their objectives at least better than the alternatives. True beliefs are thus part of intelligence: they are doxastic attitudes that not only provide an approximately true (or presumed true) description of the state of things, but also shape our perspective and our evaluation of objects and propositions, such as hypotheses and, with some differences, emotions.

I will defend the claim that intelligence is a second-order epistemic virtue because it allows epistemic subjects to choose correctly how to display their character traits so as to produce good effects, generally satisfying their own interests. In this, I follow a normative-contextualist approach to traits, considering them virtuous or vicious depending on several situational factors, such as the kinds of motives and effects surrounding the actions guided by those attitudes and decisions (Kidd et al. 2021, 82). Using conspiracist echo chambers as a case study, this talk will examine how being inside an echo chamber creates a scenario in which intelligence, like our previous beliefs, plays a fundamental role in shaping strategies and attitudes for dealing with the problems of this epistemic and emotional environment.

Thus, this work offers support for character-based and decision-making theories as a constitutive part of studies about intelligence. For this purpose, it is essential to provide empirical evidence about the circumstances influencing the psychological processes, as well as their outcomes, that occur while being immersed in social media environments, which are, according to Levy and Mandelbaum (in Matheson and Vitz, 2014), a new set of environments, different from our past ones, for which humans are not well equipped.

The research question is as follows: how can people make smart choices about what to do when they find themselves inside an echo chamber? My thesis will be that intelligence is a second-order virtue, in the sense that every person has to display their traits as virtues while countering those traits that might become vices, so as to avoid producing bad effects. If so, intelligence's main functions consist in analysing the relationships between internal and external situational factors and in coordinating actions to achieve goals in each situation.

Keywords: virtue epistemology; character traits; intelligence; decision theory; echo chambers

Philosophical issues addressed in the work: the debates between situationism and virtue epistemology and, within the latter, between responsibilism and reliabilism. I also address issues related to echo chambers, such as recognizing that one is inside one, how to deal with this situation, and which actions would be appropriate, depending on internal and external factors.

Breaking the Language Barrier: A Kripke-style model for unifying linguistic and non-linguistic reasoning

It is undeniable that the development of language gave humans an evolutionary advantage that no other species has. Studies in theoretical linguistics make bold claims that human creativity, complex social organisation, and capacity for mathematics may have arisen from the structures inherent in language (Chomsky 2005; Hinzen 2006). However, current research on animal and insect cognition shows that problem-solving, social complexity, and numerical abilities are also present in non-linguistic species (Vincenzo 2023). This situation requires a re-evaluation of the role language plays in shaping human reasoning. The first step towards this re-evaluation involves developing a theoretical framework capable of unifying linguistic and non-linguistic reasoning. In this essay, I will discuss how such a framework can be developed based on the classical Kripke-style knowledge model (Kripke 1959). I will also address the challenges for a unifying model that come from the need to explain cross-species cognitive parallels and differences. The key modification of the classical Kripke model discussed in this essay involves reworking the Hintikka-style accessibility relation between possibilities as represented by an agent (Hintikka 1962). The indexed accessibility relation proposed by Hintikka does not allow us to represent cross-species parallels and differences. Instead, we should, I argue, adopt a single accessibility relation and represent the parallels and differences in the relata—that is, the possibilities themselves (Stalnaker 2008, 2014).
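To fix ideas, the contrast can be rendered schematically in standard epistemic-logic notation. The following is a sketch under the usual Hintikka-style semantics; the single-relation clause is only a schematic reconstruction of the proposal, not a formulation taken from the cited works.

% Hintikka-style clause: one accessibility relation R_a indexed by each agent a
M, w \models K_a \varphi \quad\text{iff}\quad \forall v\, (w R_a v \Rightarrow M, v \models \varphi)

% Single-relation variant: one relation R shared across agents and species,
% with agent- or species-specific differences encoded in the possibilities
% (the relata) themselves, e.g. as structured states s that record which
% distinctions a given agent can draw
M, s \models K \varphi \quad\text{iff}\quad \forall s'\, (s R s' \Rightarrow M, s' \models \varphi)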

References

Chomsky, Noam. 2005. “Three Factors in Language Design.” Linguistic Inquiry 36 (1): 1–22.

Hintikka, Jaakko. 1962. Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca, NY: Cornell University Press. https://archive.org/details/knowledgebeliefi0000hint.

Hinzen, Wolfram. 2006. Mind Design and Minimal Syntax. Oxford: Oxford University Press.

Kripke, Saul. 1959. “A Completeness Theorem in Modal Logic.” Journal of Symbolic Logic 24 (1): 1–14. http://www.jstor.org/stable/2964568.

Stalnaker, Robert C. 2008. Our Knowledge of the Internal World. Oxford: Oxford University Press.

———. 2014. Context. Oxford: Oxford University Press.

Vincenzo, Luca Di. 2023. “Theory of Mind in Non-Linguistic Animals: A Multimodal Approach.” PhD thesis, Università di Roma.

From GenAI to Human Beings and Back: Assessing Creative Intelligence

This paper aims to provide a thorough understanding of intelligence, based on the pragmatic thought of C.S. Peirce. Generally speaking, pragmatism offers an intriguing perspective on both intelligence and technology. On one hand, it enables us to understand intelligence not in isolation, as if it were disconnected from practice. By starting with the analysis of intelligent behavior and procedures, pragmatism also aids in comprehending human intelligence in continuity with other forms of intelligent behavior that can be found in living and non-living beings alike. On the other hand, concerning technology, the pragmatist approach provides new “conceptual tools” for analyzing and understanding technology beyond the biases and heavy legacies that have characterized the philosophy of technology in modern and recent times, such as the polar opposition between an “instrumentalist” view of technology and a “substantivist” one (see Borgmann 1984, Verbeek 2022). This opposition nowadays mirrors the public debates between techno-enthusiasts and technophobes. The former believe that AI, understood as a neutral instrument, may help overcome our limits and empower mankind’s inquiry in various domains. The latter are dominated by worries about the unpredictable and uncontrollable (bad) consequences that the spread of AI will have on our society and species.

The first part of the paper develops such a pragmatist approach and shows its advantages over current debates, underlining both its philosophical relevance and its interdisciplinary potential in terms of applications. The second part tackles the specific case of GenAI, to understand to what extent we should consider it creative and how it differs from human creative intelligence, as analyzed with reference to both scientific discovery and art. In this regard, Peirce's distinction between three kinds of reasoning, namely deduction, induction, and abduction (the latter being the only creative type, according to the American philosopher), will help determine the limits and opportunities in the interactions between GenAI and human intelligence.

Three Puzzles of Adaptivity: A Lens to Understand Definitions of Life, Cognition, and Intelligence

Adaptivity, the capacity to adjust in the face of perturbation, is a prerequisite for cognition. This assumption underlies prominent modelling approaches such as enactivism (Thompson, 2007) and active inference (Parr, Pezzulo, and Friston, 2022). Yet, despite its central role, adaptivity remains conceptually underdeveloped in these frameworks. I argue that models of cognition generally fail to recognise that adaptivity presents its own unique puzzles. By explicating these issues, I advance an approach that enables genuinely incorporating adaptivity into models of cognition and intelligence developed in cognitive science.

My main claim is that a rigorous account of adaptivity should address three core puzzles:

  1. The Puzzle of Identity: Who or what is adjusting to what?
  2. The Puzzle of Norms: What norms guide these adjustments?
  3. The Puzzle of Scope: What phenomena does adaptivity apply to?

Without addressing these questions, our attribution of adaptivity will be arbitrary. Living systems that adapt their behaviour would not be meaningfully different from rivers that “adapt” their flow or thermostats that “adapt” to the changing room temperature. To avoid such trivialisation of adaptivity, our investigation should focus on identifying candidate criteria and assessing their appropriateness. Thus, existing models involving the concept of adaptivity (e.g., “adaptive active inference”, Kirchhoff, Parr, et al., 2018) could be scored based on how well they resolve these puzzles. In this way, the puzzles can also be used as a guideline for formulating new modelling approaches.

Finally, by focusing on these conceptual puzzles, we are also in a more natural position to address an issue that has, to my knowledge, not been addressed in the philosophy of cognitive science at all, namely the origin of adaptivity. How did adaptivity emerge in evolutionary history? The answer to this simple question has profound consequences for our understanding of the evolution of intelligence and of life in general. While it is often assumed that adaptivity is necessary for intelligence, it is not clear whether life requires adaptivity. Life without adaptivity is not just a conceptual possibility (Di Paolo, 2005); it can also serve as a legitimate hypothesis about the origins of life (Frenkel-Pinter et al., 2021; Runnels et al., 2018).

References

Di Paolo, E. (2005). Autopoiesis, Adaptivity, Teleology, Agency. Phenomenology and the Cognitive Sciences, 4(4), 429–452. https://doi.org/10.1007/s11097-005-9002-y

Di Paolo, E., Thompson, E., & Beer, R. (2022). Laying down a forking path: Tensions between enaction and the free energy principle. Philosophy and the Mind Sciences, 3. https://doi.org/10.33735/phimisci.2022.9187

Frenkel-Pinter, M., Rajaei, V., Glass, J. B., Hud, N. V., & Williams, L. D. (2021). Water and Life: The Medium is the Message. Journal of Molecular Evolution, 89(1–2), 2–11. https://doi.org/10.1007/s00239-020-09978-6

Kirchhoff, M., Parr, T., Palacios, E., Friston, K., & Kiverstein, J. (2018). The Markov blankets of life: Autonomy, active inference and the free energy principle. Journal of The Royal Society Interface, 15(138), 20170792. https://doi.org/10.1098/rsif.2017.0792

Parr, T., Pezzulo, G., & Friston, K. (2022). Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. The MIT Press.

Runnels, C. M., Lanier, K. A., Williams, J. K., Bowman, J. C., Petrov, A. S., Hud, N. V., & Williams, L. D. (2018). Folding, Assembly, and Persistence: The Essential Nature and Origins of Biopolymers. Journal of Molecular Evolution, 86(9), 598–610. https://doi.org/10.1007/s00239-018-9876-2

Thompson, E. (2007). Mind in Life: Biology, Phenomenology, and The Sciences of Mind. Belknap Press of Harvard University Press.

Towards beneficial AI: biomimicry framework to design intelligence cooperating with biological entities

This paper revisits the origins of artificial intelligence (AI) within a biomimicry/bio-inspired design framework as an approach to accelerating the development of beneficial AI. Designing long-term security into AI systems, and ensuring benefits from them, is currently one of the most important challenges. Just as AI emerged from learning from and mimicking biological systems, its further development can benefit from models in biology. The ideas here are based on the philosophical grounds of natural computing (e.g. Dodig-Crnkovic 2020, 2022). A first sketch of the model has already been proposed (Polak & Krzanowski, 2023).

Biomimicry—drawing on nature’s evolutionary solutions—has long influenced disciplines such as material science, robotics, and computing. In the realm of AI, this project proposes a broader biological lens—embracing intelligence as it emerges across a wide variety of organisms, from microbial colonies to complex mammalian nervous systems.

The proposed investigation centers on several interrelated themes. The first concerns expanding AI’s sensory capabilities by emulating mechanisms that exceed human perception—such as echolocation, chemical sensing, or distributed tactile sensitivity. A second theme focuses on swarm intelligence and multi-agent systems, drawing lessons from collective behavior to improve the coordination and emergent problem-solving among AI agents.

A third track explores alternative and hybrid computational architectures, inspired by biochemical and physiological processes.

We propose including the study of biosemiotics, or how living systems use signs and symbols to interpret their environments, opening the door to more context-aware and symbolically grounded embedded AI systems, as well as different mechanisms for generating intelligence.

Ultimately, it is necessary to study and model the relationships between biological organisms in order to be able to harmoniously integrate AI into existing networks of biological relationships and shape them in the desired way. This paper seeks to build a multidisciplinary framework for biomimicry-inspired beneficial AI, enabling new forms of cognition, perception, and resilience. By modeling the diversity and adaptability of biological intelligence, this project aspires to develop AI systems that are not only more powerful and efficient but also more robust, self-regulating, and ecologically integrated.

Bibliography

Dodig-Crnkovic, G., 2020. Natural Morphological Computation as Foundation of Learning to Learn in Humans, Other Living Organisms, and Intelligent Machines. Philosophies, 5(3), pp.17–32.

Dodig-Crnkovic, G., 2022. In search of common, information-processing, agency-based framework for anthropogenic, biogenic, and abiotic cognition and intelligence. Philosophical Problems in Science, (73), pp.17–46.

Polak, P. and Krzanowski, R., 2023. How to Tame Artificial Intelligence? A Symbiotic AI Model for Beneficial AI. Ethos 36(3(143)), pp.92–106.

Intelligence as Typological Cognition: Revisiting Jungian Functions for Human and Artificial Minds

Conventional definitions of intelligence have been shaped by a rationalist tradition, where intelligence is reduced to logical reasoning, information processing, and performance optimization. In this view, intelligence is often equated with a monolithic thinking function. However, this understanding overlooks both the diversity and the inner tensions within human cognition. Recent psychological and neurocognitive research supports a more complex view: Daniel Goleman’s Emotional Intelligence theory highlights emotional regulation as integral to intelligence; Daniel Kahneman’s dual-process model shows that intuitive, fast thinking is pervasive in reasoning; and Dario Nardi’s neurophysiological studies identify differentiated brain activity patterns corresponding to Jungian functions.

Carl Jung’s theory of psychological types provides an early and profound contribution to this line of research. Jung proposed that cognition is structured not only by four basic functions—Thinking, Feeling, Intuition, and Sensation—but also by two opposing orientations: introversion (inner-directed) and extraversion (outer-directed). Crucially, Jung argued that thinking itself can be divided into Introverted Thinking (Ti), which is concerned with internal coherence and system-building, and Extraverted Thinking (Te), which is focused on external data and pragmatic results. Moreover, Feeling is conceptualized as a rational function that evaluates situations based on subjective values (Fi) or social harmony (Fe). The tension between introverted and extraverted orientations means that developing one function tends to suppress its opposite. For instance, a Ti-dominant person may sacrifice practical results for internal conceptual clarity, while a Te-dominant agent may disregard subjective depth for external efficiency. This typological framework reveals that intelligence is never uniform—it is informed by differentiated and sometimes conflicting cognitive functions, each shaping distinct ways of engaging with reality or constructing inner cognitive systems. Intelligence, from this perspective, is never uniform or neutral—it is always selective, structured, and shaped by competing cognitive priorities.

Jung’s eight-function model offers a compelling map of intelligence as a multi-dimensional phenomenon, with each function corresponding to a distinct mode of information processing and world engagement. The four rational/judging functions are logical-analytic (Ti, introverted thinking), pragmatic-executive (Te, extraverted thinking), subjective value-based (Fi, introverted feeling), and social-relational (Fe, extraverted feeling); the four irrational/perceiving functions are intuitive-visionary (Ni, introverted intuition), creative-divergent (Ne, extraverted intuition), experiential memory-based (Si, introverted sensing), and perceptual action-oriented (Se, extraverted sensing).

These functions shape not only individual personality but also collective human knowledge production, including philosophy, science, and technology. No theory or worldview is purely objective; all cognition operates through functional preferences and typological biases. This insight is especially relevant for Artificial Intelligence (AI), as AI systems inherit human cognitive biases through data input, models, and design logic.

From this perspective, intelligence is not merely a computational capacity but a multi-faceted construct. Current AI development shows clear functional imbalances: e.g., while excelling in Te-like data optimization and Se-like sensory processing, AI systems remain underdeveloped in Fi-style ethical judgment and Si-style experiential learning. For both human cognitive research and AI development, typological cognition offers a necessary tool for better understanding cognitive diversity, developing all-round intelligence, and guiding future AI towards more human-like, ethical, and context-sensitive architectures.

Proto-Intelligent Inquiry

At the frontier of knowledge, we feel stupid about a given subject, but on the edge of inquiry we also feel stupid about how to proceed—how to choose the next step towards discovery (Schwartz, 2008). This talk introduces Meta-Creative Problem-Solving (MCPS), a new concept designed to study and improve inquiry practices and norms as they become intelligent.

To uncover knowledge, we search for invariances between epistemic activities (e.g., proposed definitions, models, theories; operationalization of methods; established goals and cultivated values) and the subject of inquiry. The creative part of MCPS operates through directed integration: generating variations within epistemic activities and evaluating the consequent changes in relation to the subject of inquiry. The metacognitive dimension of MCPS recognizes that monitoring and controlling epistemic activities are themselves (meta)epistemic activities. The apparent infinite regress of meta-approaches (i.e., if every epistemic change is guided by another epistemic act, where does it end?) is resolved, among other ways, through metacognitive feelings like confidence and fluency (Koriat et al., 2008). These feelings integrate causal information from both present and past problem-solving relationships between epistemic activities and the subject of inquiry (Shea, 2023, 2024), guiding complex problem-solving (Ackerman, 2019; Rudolph et al., 2017). They function as half-baked conjectures, preliminary knowledge structures guiding inquiry before formal hypotheses emerge, and, on the meta-level, as half-baked heuristics, preliminary methodological structures guiding inquiry before norms or virtues develop.

We don't merely solve problems; we discover how to do inquiry itself (Chang, 2022), originating and developing methods, norms, and frames of thinking. The MCPS concept extends Nersessian's (2008) insight that scientific models serve as cognitive artifacts embodying assumptions whose consequences guide model refinement. Similarly, it builds upon Boden's (1991) observation that breakthrough problem-solving often requires reformulating the problem itself. The Meta-Creative Problem-Solving concept generalizes these approaches by recognizing that (meta)epistemic practices and norms themselves embody assumptions whose consequences guide their own refinement. This self-consistency means that during reflection, we may change even the epistemic practices and norms of subsequent reflection, which we can evaluate in light of the subject of inquiry, amid metacognitive feelings crucial at the fractal beginnings of knowledge and inquiry.

The concept of MCPS can help understand—and guide—inquiry practices and norms when no ready-made roadmap exists. This talk will further clarify MCPS through the example of the first mathematical discovery with LLMs (Romera-Paredes et al., 2024) and show how MCPS can guide the development of inquiry norms and practices to integrate theory-driven scientists with data-driven causal discovery algorithms (Andersen, 2024; Petersen et al., 2023; Runge et al., 2023). The concept of Meta-Creative Problem-Solving takes up the challenge of answering how inquiry becomes intelligent.

References:

Ackerman, R. (2019). Heuristic Cues for Meta-Reasoning Judgments: Review and Methodology. Psihologijske Teme, 28(1), 1–20.

Andersen, H. K. (2024). Why adoption of causal modeling methods requires some metaphysics. The Routledge Handbook of Causality and Causal Methods, 87-98.

Boden, M. A. (1991). The creative mind: Myths & mechanisms. Basic Books.

Chang, H. (2022). Realism for Realistic People. Cambridge University Press.

Koriat, A., Nussinson, R., Bless, H., & Shaked, N. (2008). Information-based and experience-based metacognitive judgments: Evidence from subjective confidence. Handbook of Metamemory and Memory, 117–135.

Nersessian, N. J. (2008). Creating scientific concepts. MIT Press.

Petersen, A. H., Ekstrøm, C. T., Spirtes, P., & Osler, M. (2023). Constructing causal life-course models: Comparative study of data-driven and theory-driven approaches. American Journal of Epidemiology, 192(11), 1917–1927.

Romera-Paredes, B., Barekatain, M., Novikov, A., Balog, M., Kumar, M. P., Dupont, E., Ruiz, F. J. R., Ellenberg, J. S., Wang, P., Fawzi, O., Kohli, P., & Fawzi, A. (2024). Mathematical discoveries from program search with large language models. Nature, 625(7995), Article 7995.

Rudolph, J., Niepel, C., Greiff, S., Goldhammer, F., & Kröner, S. (2017). Metacognitive confidence judgments and their link to complex problem solving. Intelligence, 63, 1–8.

Runge, J., Gerhardus, A., Varando, G., Eyring, V., & Camps-Valls, G. (2023). Causal inference for time series. Nature Reviews Earth & Environment, 4(7), 487–505.

Schwartz, M. A. (2008). The importance of stupidity in scientific research. Journal of Cell Science, 121(11), 1771–1771.

Shea, N. (2024). Metacognition of Inferential Transitions. Journal of Philosophy, 121(11), 597-627.

Shea, N. (2024). Concepts at the Interface. Oxford University Press.

The Threat to Human Intelligence From Today's AI

Reflections upon the nature of today’s Artificial Intelligence (AI), on the one hand, and human intelligence (HI), on the other, are now intertwined, catalyzed by the stunning increase in the breadth and speed of artificial agents produced (mostly) by for-profit corporations, and used by the many humans who pay for the privilege of using (the larger, more intelligent(?) versions of) these agents. Unfortunately, because of the particular nature of this AI, this reflection constitutes an attack on HI. This is so because the basis for today’s ascension of AI is a venerated form of machine learning (ML) inconsistent with a large part of what, undeniably, has distinguished HI since the dawn of recorded history. Specifically, the sub-type of ML known as “deep learning” (DL) is fast reducing the received conception of HI to the sub-human level, while at the same time audaciously raising the banner of “foundation models” over today’s AI to signal the subsuming of all of HI (see, e.g., Agüera Y Arcas & Norvig 2023).

Humans have long been distinguished by their ability to create and assess chains of logical reasoning (deductive, analogical, inductive, abductive) over internally stored, structured, declarative knowledge, in order to produce conclusions that are themselves structured and declarative. But AI qua DL literally has no such knowledge in the first place: there is nowhere in an artificial deep neural network (DNN) where such knowledge resides (see, e.g., Russell & Norvig 2020; and for a lively, lucid treatment of the issue, see Wolfram 2023). Ironically, none of the corporations such as OpenAI that sell and promote DL could survive for a week without the humans therein employing logical reasoning on a broad scale. Genuine HI, for a simple but telling example, has for millennia consisted in part in being able to (i) memorize the algorithm of long multiplication over two whole numbers, (ii) apply it to compute the product of, for instance, 23 and 5 (to yield 115), and (iii) certify as correct and gauge, logically, how feasible this algorithm is in the general case. AI qua DL does not have even this intelligence, since no algorithms whatsoever are stored in a DNN; and as a matter of settled science, DNNs only approximate the computing of functions.

Hence, since HI is increasingly identified with DL (and human minds identified with DNNs), HI is regarded as dramatically lower than it is. In short, today’s DNN-based GenAI, itself painfully illogical (see, e.g., Arkoudas 2023a, 2023b), portrays HI as constitutionally illogical. This is a growing cancer that philosophy as a field must uncover and seek to heal, or at least mitigate. Our ability to engage in logical reasoning is in large part what sets us apart from all other natural species, and this hallmark, while under attack today, must be protected. The need to do so is all the more dire because Artificial General Intelligence (AGI), taken to have HI as its initial target (before aiming at superhuman intelligence), is defined as being devoid of logical reasoning (see, e.g., the entirely statistical Legg & Hutter 2007).
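As a minimal illustration of the kind of explicitly stored, inspectable algorithm at issue in (i)–(iii), here is a hedged Python sketch of schoolbook long multiplication; the function name and the digit-list representation are illustrative only and are not drawn from the abstract or its sources.

def long_multiply(a: int, b: int) -> int:
    """Schoolbook long multiplication over decimal digits."""
    a_digits = [int(d) for d in str(a)][::-1]  # least-significant digit first
    b_digits = [int(d) for d in str(b)][::-1]
    result = [0] * (len(a_digits) + len(b_digits))
    for i, da in enumerate(a_digits):
        carry = 0
        for j, db in enumerate(b_digits):
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b_digits)] += carry
    # Reassemble the digit list into an integer (most-significant digit first).
    return int("".join(str(d) for d in reversed(result)))

assert long_multiply(23, 5) == 115  # the worked example from the abstract

Every step of this procedure is a discrete, certifiable operation over declarative structures, which is exactly the kind of stored algorithm the abstract argues a deep neural network does not contain.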

I end by (briefly) offering and defending a course of action to address the threat to HI.

References

Agüera Y Arcas, B. & Norvig, P. (2023) “Artificial General Intelligence is Already Here” Noema, October.

Arkoudas, K. (2023a) “ChatGPT is No Stochastic Parrot. But It Also Claims That 1 is Greater than 1” Philosophy & Technology 36.3: 1–19.

Arkoudas, K. (2023b) “GPT-4 Can’t Reason: Addendum” Medium.

https://medium.com/@konstantine_45825/gpt-4-cant-reason-addendum-ed79d8452d44

Russell, S. & Norvig, P. (2020) Artificial Intelligence: A Modern Approach (New York, NY: Pearson).

Legg, S. & Hutter, M. (2007) “Universal Intelligence: A Definition of Machine Intelligence” Minds and Machines 17.4: 391-444.

Wolfram, S. (2023) “What is ChatGPT Doing … and Why Does It Work?”

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work

Philosophical Issues

Such issues are myriad in the present case. The nature of intelligence—artificial, natural/human, supernatural/superhuman—must be engaged. A brief intellectual history, rooted in philosophy, must be provided for context. The growing “science of intelligence” within the fields of AI and AGI must be analyzed philosophically. The analysis of how humans learn must be compared with the analysis of how ML (DL in particular; and RL) systems “learn.” Substantiation as to how today’s “GenAI” seriously limits intelligence in both AI and HI, and threatens the latter, must be provided by argument. The presentation/paper concludes with a proposed solution for improving the current state of affairs.

Under the reign of AI: a new 21st century taste dispute? Reflections about Ian Hamilton Finlay's Little Sparta and John Goto's High Summer.

In the 18th century, significant discussions arose under the aegis of what became known as the “Discussion of Taste”. Among other philosophers and authors, Francis Hutcheson (An Inquiry into the Original of Our Ideas of Beauty and Virtue, 1725), David Hume (“Of the Standard of Taste”, 1757/1760), Edmund Burke (A Philosophical Enquiry into the Origin of our Ideas of the Sublime and the Beautiful, 1759), and Kant’s “Antinomy of Taste” (Critique of Judgement, 1790) stand out, without forgetting the visionary thought of Mme de Lambert, Anne-Thérèse Marguenat de Courcelles (1647–1733), or the article on the concept of “Taste” in the Encyclopédie (1757), written collectively by Jaucourt, Voltaire, Montesquieu, Diderot, d’Alembert, Blondel, Rousseau, and Landois (see Volume 7, pp. 758–77). As the definition of “novelty” proposed by Hume became consolidated, artistic creation in the West sought to assert itself in the territory of the “original”, understood as synonymous with “unprecedented” and “experimental”. Taste, as an aesthetic standard, implied the search for recognized and canonical stipulations, while at the same time being adorned with countless idealizations.

In analyzing the digital applications in photography conceived by John Goto that affect eighteenth-century Oxfordian gardens (http://www.johngoto.org.uk/essays/High_Summer_text.htm), this articulation emerged under the auspices of the expanded and sovereign norm of taste, though one of extreme critical and ironic acuity, it must be emphasized. On the other hand, in considering the (almost) megalomaniacal project embodied in the garden landscape titled Little Sparta (begun after 1966 at Dunsyre, in the Pentland Hills, South Lanarkshire, Scotland; https://www.littlesparta.org.uk/) by Ian Hamilton Finlay and Sue Finlay, a complex and timeless work, another term (Gadamer or Schiller) is incorporated into this discussion, enhancing the thinking of artist-authors who today revise the mentalities of the past.

When experimentation was initiated using image-prompting tools on open-access AI platforms, applied to Goto’s High Summer (1996-2001) series (from the larger “Ukadia” project), distinct aesthetic tastes were identified that could be shaped according to the prompts, but these remained limited to a certain stereotypical, “pre-defined” taste that could not be entirely mastered. In approaching theories of taste in Ian Finlay’s work, the research in progress applies “brief narratives” similar to those used in the “prompt into image” approach to Goto’s work. Here, the sections were identified within the conceived aesthetic-historical landscape, in this case three-dimensional and spatialized in itself, rather than shaped into photographic molds as a beginning and an end in themselves, as John Goto’s principles directed. In both case studies, aesthetic gardens are the common denominator, studied here under the aesthetics of taste, artistic standards, and ideological issues, more or less covert in certain specific cases, as will be underlined in this oral presentation. A study methodology emerged concerning the parks and gardens approached by John Goto; as for Little Sparta, Ian Finlay’s installation was seen at the São Paulo Biennale (2012); in both cases it has been possible to experience the works in person in recent decades.

We may wonder about the subjectivity and/or objectivity of taste, oscillating between Hume and Kant, but how can it be equated with AI standards of taste, its options, or its commands? Will we have innumerable quarrels of taste, as many as the uncountable number of people who can access AI platforms and pursue an individually desired creation of an “inner” or “outsider” image generated from a subjective narrative of a real or imaginary view, feeling, thought, or intuitive insight? We can evaluate the generated image as awful or as agreeable, even though it is not similar or close to the mental imagery or to the description of a painting or photograph. We can consider the image beautiful, very beautiful, or sublime: so we step into Kant’s statements; we travel between objectivity and subjectivity. It might seem unbearable, and once again the debate is, and will be, overwhelming! Are we able to shape, more than two centuries later, other or newer aesthetic categories induced by AI-generated images, if prompted by our reading, seeing, describing, and directing through words the images created by contemporary artists such as Ian Finlay and John Goto?
