Please login first

List of accepted submissions

A Comparison of the effect of language on high-level information processes in humans and linguistically mature generative AI

Short abstract

The phenomenal progress of generative AI in recent years, and Large Language Models in particular, reignites discussions of the similarities and differences between human and machine intelligence, with, at their apex, the question of whether such systems are, or have the potential to become, conscious agents. This article approaches these questions from the viewpoint of the overarching explanation for biological and technological information systems provided by Emergent Information Theory. Particular focus is given to the significance of language as a modelling system for internal information storage and processing, and to language-based information transfer between intelligent entities. This approach illuminates the strong ontogenetic and ontological convergence between human intelligence and this new form of AI, but also their remaining and fundamental differences. With respect to the ultimate philosophical question, the conclusion drawn is that while such systems may support consciousness-like phenomena, this is not directly comparable to human phenomenal consciousness.

Extended abstract

Philosophical consideration of the possibility of machine consciousness has a long history, as famously embodied in Turing’s “Imitation Game” [1]. In what was at the time a thought experiment, it was proposed that if the textual responses of a machine were to be indistinguishable from those of a thinking human, we may need to acknowledge that the machine is also thinking.

The development of digital technologies since then has progressively transformed these musings into practical considerations. Computers running programs designed and constructed by humans can produce coherent texts, but the way in which these texts are generated justifies their classification as mere deterministic machines. The instruction PRINT “I am a thinking machine” does not make a thinking machine. Even the first generations of artificial neural networks, which developed their functions through machine learning rather than design and construction, do not constitute a serious challenge. While they use deep neural networks that bear functional similarities to biological neural networks, supervised learning towards a predetermined task such as text recognition [2] can be considered little more than an alternative method of constructing a machine with a required informational function.

Where things start to become less clear is with unsupervised learning. Such systems are not given a predetermined input–output relationship, but are asked to autonomously find patterns in input data: a process more similar to the thought processes of a human faced with a novel situation requiring analysis. From simple beginnings [3], such systems have progressed to becoming increasingly similar to biological neural networks [4]. However, such experimental models are generally fed with demarcated input data sets such as the MNIST grayscale images [5], and applications are usually still focused on a pre-specified problem such as the analysis of medical data on tumor growth [6]. Such systems therefore still seem far from deserving acknowledgement as ‘thinking machines’.
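As an illustration (a minimal sketch of ours, not part of the abstract), the following Python fragment shows unsupervised pattern-finding in the spirit of Kohonen's self-organizing maps [3]: no target output is ever specified, yet the units settle onto the two clusters hidden in the data. All names and parameter values are invented for the example.

```python
import random

random.seed(0)
units = [random.random() for _ in range(5)]  # a tiny 1-D map of five units

def train(data, lr=0.3, radius=1, epochs=20):
    """Winner-take-most learning: the closest unit and its neighbours
    are pulled toward each input; no labels or targets are involved."""
    for _ in range(epochs):
        random.shuffle(data)
        for x in data:
            w = min(range(len(units)), key=lambda i: abs(units[i] - x))
            for i in range(max(0, w - radius), min(len(units), w + radius + 1)):
                units[i] += lr * (x - units[i])

# Unlabeled input: two clusters around 0.2 and 0.8, never named as such.
data = [random.gauss(0.2, 0.03) for _ in range(50)] + \
       [random.gauss(0.8, 0.03) for _ in range(50)]
train(data)
print([round(u, 2) for u in sorted(units)])  # units gather near the two clusters
```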

A whole new chapter in this book has been opened by the recent developments in generative AI, and Large Language Models (LLMs) in particular. While still requiring some form of input (generally a human-derived text prompt) to initiate a response, the output generated demonstrates a significant degree of autonomous creativity. The immense scale and diversity of their training sets allow them to produce convincing responses to almost any question imaginable; their linguistic capabilities allow these to be presented in accessible natural language; and their probabilistic nature prevents the kind of deterministic duplication that was previously a hallmark of digital systems.
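The non-determinism mentioned here standardly comes from sampling the next token from a probability distribution rather than always taking the most likely candidate. A minimal sketch (the logit values are invented for illustration; this is the generic mechanism, not a description of any particular model):

```python
import math
import random

def sample_next(logits, temperature=0.8):
    """Sample a token index from softmax(logits / temperature).
    Higher temperature flattens the distribution and increases variety."""
    weights = [math.exp(l / temperature) for l in logits]
    r = random.uniform(0, sum(weights))
    for token, w in enumerate(weights):
        r -= w
        if r <= 0:
            return token
    return len(weights) - 1

logits = [2.0, 1.5, 0.3]  # hypothetical scores for three candidate tokens
print([sample_next(logits) for _ in range(10)])  # output varies run to run
```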

Returning to Turing’s Imitation Game, these characteristics of modern LLMs bring them considerably closer to being indistinguishable from a human [7], leading to the following question: to what degree is an LLM that speaks the words “I am a thinking machine” more convincing than a computer program written to output this sequence of characters? This question, and its ethical consequences, have seen renewed interest in the popular press [8] and academic circles [9]. However, we may ask ourselves whether this is the relevant question to be asking. The different origins and natures of these systems and the human brain mean that it can be considered a distraction from the following deeper question: what is the nature of the higher-level informational entities and processes existing in LLMs?

This article will consider the impact of the use of broad-based symbolic language by LLMs on their higher-level information processes from the viewpoint of Emergent Information Theory [10]. The fact that this relatively new theory provides a generic theoretical framework for both biological and technological information-based systems allows for a more direct comparison between the two. Specifically, the question will be placed within the context of the long history of philosophical consideration of the relationship between language, thought, and consciousness in humans [11]. To what extent do generic linguistic abilities over a broad knowledge base, coupled with probabilistic response generation, lead to the type of autonomous creation of conceptual content that can justifiably be characterized as thought? Taking a step further, what means do we have at our disposal for confirming or disproving the existence within these systems of the qualia of phenomenal consciousness?

References

[1] A. Turing, “Computing machinery and intelligence,” Mind, vol. LIX, no. 236, pp. 433–460, 1950.

[2] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard and L. D. Jackel, “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.

[3] T. Kohonen, “Self-organized formation of topologically correct feature maps,” Biological Cybernetics, vol. 43, no. 1, pp. 59–69, 1982.

[4] N. Ravichandran, A. Lansner and P. Herman, “Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks,” Neurocomputing, vol. 626, p. 129440, 2025.

[5] P. Grother and K. Hanaoka, “NIST special database 19. Handprinted forms and characters database,” National Institute of Standards and Technology, vol. 10, no. 69, 1995.

[6] C. Strack, K. Pomykala, H. Schlemmer, J. Egger and J. Kleesiek, ““A net for everyone”: fully personalized and unsupervised neural networks trained with longitudinal data from a single patient,” BMC Medical Imaging, vol. 23, no. 1, p. 174, 2023.

[7] C. Jones and B. Bergen, “Does GPT-4 pass the Turing test?,” in Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024.

[8] D. Milmo, “AI systems could be ‘caused to suffer’ if consciousness achieved, says research,” The Guardian, 3 2 2025. [Online]. Available: https://www.theguardian.com/technology/2025/feb/03/ai-systems-could-be-caused-to-suffer-if-consciousness-achieved-says-research. [Accessed 14 2 2025].

[9] D. Chalmers, “Could a large language model be conscious?,” arXiv preprint arXiv:2303.07103, 2023.

[10] D. Boyd, Existing in the Information Dimension: An Introduction to Emergent Information Theory, London: Routledge, 2024.

[11] P. Carruthers, Language, Thought and Consciousness: An Essay in Philosophical Psychology, Cambridge: Cambridge University Press, 1998.

Thought Structures and their Morphisms

Structure-preserving maps (morphisms) are ubiquitous in mathematics. They are used as analogies and as tools for transforming problems into easier-to-solve formats. We argue that the usefulness of morphisms has a bigger scope. Mathematics has unintentionally developed a grand theory of understanding and explanation, since finding compatible operations across domains is a universal mechanism of any type of intelligence. Formally, morphisms are described by the equation

f(x · y) = f(x) ∗ f(y),

where · and ∗ are the two operations in the different domains. We can choose to transfer the inputs and carry out the computation by ∗, or compute with · first and then transfer the result. If there is a morphic relation, then it does not matter where we perform the operations, as they are compatible.
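To make the compatibility condition concrete (the example is ours, not from the abstract), the two computation routes can be compared numerically. The classic instance is the logarithm, which carries (positive reals, ·) to (reals, +):

```python
import math
import random

def is_morphism(f, op_src, op_dst, samples, tol=1e-9):
    """Check f(x · y) == f(x) ∗ f(y) on sampled pairs: transfer-then-compute
    must agree with compute-then-transfer."""
    return all(abs(f(op_src(x, y)) - op_dst(f(x), f(y))) < tol
               for x, y in samples)

random.seed(1)
pairs = [(random.uniform(0.1, 10), random.uniform(0.1, 10)) for _ in range(100)]
# log(x * y) = log(x) + log(y): the two routes give the same result.
print(is_morphism(math.log, lambda a, b: a * b, lambda a, b: a + b, pairs))  # True
```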

An archetypical example is a cartographic map, but its geometric or topological features can be generalized. We define thought structures using directed graphs where an edge represents passing from one thought to another. The type of connection does not matter much for this all-encompassing definition. It can be a random or guided association, a memory recall, or, in special cases, logical inference. Understanding is then some (partial) morphism between thought structures or other systems. Thinking is compositional (as in category theory), and understanding is morphic (as in algebra). Compatible operations guarantee some useful similarity between the domains.

This argument is not entirely new. Any modeling relationship implies a morphic relation. Conceptual metaphors in cognitive linguistics are also based on the same mechanism. However, there are several misconceptions about the usefulness of morphisms: the seeming rigidity of the mathematical definition (the domains have to be fully defined) ruling out creativity, focusing on isomorphisms (1:1 scale maps) and not taking advantage of information reduction, and missing compositionality by using n-ary relations.

The morphic theory of understanding applies both to enhancing natural intelligence, by deliberately creating and maintaining morphic relations for thinking, and to benchmarking artificial intelligence, by looking for morphic representations. In this talk, we will demonstrate how the explicit formulation of morphisms improves explanations and describe these future applications. Further information and references can be found in the preprint at https://arxiv.org/abs/2411.06806.

Intelligent behaviour as adaptive control guided by accurate prediction

What is intelligence and why is it needed? Current research in the cognitive and life sciences presents only fragmented views on this question. We examine the recent proposal that intelligence can be understood as accurate prediction, and develop it behaviourally to include adaptive control. We argue that this framework, albeit already applied to diverse domains of intelligence research, needs conceptual clarification. We specifically refine the notions of “accuracy” in terms of practically exploitable representation and “prediction” in terms of action-oriented probabilistic inferences at the subpersonal level of information processing. We then argue that this interpretation does not fully explain the mechanisms of intelligence in artificial systems, but that it nevertheless provides a useful analysis. Firstly, this view allows us to demarcate cognition from intelligence, the latter of which comes out as a more sophisticated form of efficient and robust information processing that may be realised in systems that are non-cognitive (e.g., bacteria and large language models). Secondly, this view provides a unified platform for researchers from different disciplines to integrate their diverse theoretical perspectives, share fragmented data, and effectively discover intelligence in non-human systems. We also explore some obstacles and avenues to scaling the view up to adequately capture intelligence in social and collective systems.

Our framework ties the different notions of intelligence together in a natural way by integrating them into a control-theoretic framework of adaptive control and model-based reinforcement learning. We posit that memory, knowledge, experience, and understanding can all be interpreted in terms of an adaptive controller’s predictive model: memory pertains to the ability to store sensory traces; knowledge indicates that memory traces are structured and consolidated to afford effective prediction; and experience refers to the sensory traces of an agent that is situated and continuously sampling the world, learning both its environment’s structure and statistics and the statistics of its own body. So long as there is uncertainty both about the state of the environment and about which behaviour best achieves the agent’s goals, the agent needs to make decisions and choices based on a value system in which some outcomes are better or worse than others. Together, these constitute the ability of the agent to make judgements, weighting and prioritising behaviours and outcomes in relation to its internal needs.
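The following toy sketch (our illustration under the stated reading, not the authors' implementation; all names and values are assumptions) shows how these notions can line up in code: raw traces as memory, consolidated statistics as knowledge, and value-weighted action selection as judgement:

```python
from collections import defaultdict

class AdaptiveController:
    """Toy model-based agent: stores sensory traces (memory), consolidates
    them into outcome statistics (knowledge), and selects actions by
    predicted value under uncertainty (judgement)."""

    def __init__(self, actions):
        self.actions = actions
        self.traces = []                 # raw (s, a, s_next, r) tuples: memory
        self.stats = defaultdict(list)   # (s, a) -> observed rewards: knowledge

    def observe(self, s, a, s_next, r):
        self.traces.append((s, a, s_next, r))
        self.stats[(s, a)].append(r)

    def predict(self, s, a):
        """Predicted reward of doing a in s (the controller's model)."""
        rs = self.stats[(s, a)]
        return sum(rs) / len(rs) if rs else 0.0

    def act(self, s):
        # Judgement: prefer the action with the best predicted outcome.
        return max(self.actions, key=lambda a: self.predict(s, a))

agent = AdaptiveController(["forage", "rest"])
agent.observe("hungry", "forage", "fed", 1.0)
agent.observe("hungry", "rest", "hungry", -0.5)
print(agent.act("hungry"))  # "forage": learned prediction guides control
```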

Jean Piaget and Objectivity—Genetic Epistemology’s Place in a View from Nowhere

The most natural expression of the aim of objective knowledge according to Thomas Nagel is ‘we must get outside of ourselves, and view the world from nowhere within it’ [1]. Taken literally, however, he finds it unintelligible. Realistically, we cannot in fact get outside of ourselves; we can only hope to achieve a more detached conception by relying ‘less and less on certain individual aspects, and more and more on something else, less individual, which is also part of us’ [1]. This ‘self-transcendent conception should ideally explain (1) what the world is like; (2) what we are like; (3) why the world appears to beings like us in certain respects as it is and in certain respects as it isn’t; (4) how beings like us can arrive at such a conception.’ [1]

Nagel’s self-transcendent conception of objective knowledge is dynamic, involving a dialectic between changes in knowledge of (1) and (2) to form an explanation of (3). In modern times, epistemological problems often occur due to advancements in the human sciences; however, Nagel laments that (4) is often lacking: ‘We tend to use our rational capacities to construct theories, without at the same time constructing epistemological accounts of how those capacities work. Nevertheless, this is an important part of objectivity. What we want is to reach a position as independent as possible of who we are and where we started, but a position that can also explain how we got there.’ [1, e.g., 2]

Jean Piaget’s research, especially his research on the psychogenesis of concepts, contributed to (2); however, this was done to epistemological ends. He conceived of a scientific epistemology founded solely on development, and the psycho- and historiogenesis of knowledge were its methodological pillars. It is known as ‘genetic epistemology’ but represented a research programme rather than a finished theory [3]. Nevertheless, after decades of research, Piaget concluded that objective knowledge is a process rather than a state [4]. More importantly for the present purposes, however, genetic epistemology also provided an explanation for (4).

In this paper, I argue that Piaget explains how beings like us develop rational capacities to construct theories. Having first given my reasons for choosing Nagel’s conception of objectivity in the Introduction, I proceed to characterise ‘theory’ and show that the concept has both logical and algebraic descriptions under Logical and Algebraic Descriptions of Theories. Under Development of Our Rational Capacities, I then briefly sketch Piaget’s account of how our rational capacities develop. This development culminates in hypothetico-deductive thought, and, under The Structure of Hypothetico-Deductive Thought, I set out the interpropositional grouping, which characterises our rational capacities at this stage of development. Under The Interpropositional Grouping: A Canonical Theory, I compare the interpropositional grouping with the algebraic description of theories and conclude that the interpropositional grouping actually represents an archetypical theory. Finally, I conclude by briefly summarising my findings before locating genetic epistemology in a view from nowhere.

References
1. Nagel, Thomas. 1986. The View From Nowhere. New York; Oxford: Oxford University Press, USA.
2. Johnson-Laird, Philip N. 2008. How We Reason. Oxford: Oxford University Press.
3. Piaget, Jean. 1950. Introduction à l’épistémologie génétique. (I) La pensée mathématique. Electronic version from Fondation Jean Piaget pour recherches psychologiques et épistémologiques. Pagination according to 1st edition 1950. Vol. 1. 3 vols. Paris: Presses Universitaires de France.
4. Piaget, Jean, and Rolando Garcia. 1989. Psychogenesis and the History of Science. Translated by Helga Feider. New York: Columbia University Press.

Body as Anti-Anthropomorphic Landscape: Natural-Born Intelligence in Bog Body

A bog body is the general term for a body excavated from a peat bog; such bodies have been found across northern Europe. Many bog bodies date from B.C. periods and are thought to have belonged to persons of noble standing, such as an ancient chief or “king” [Fischer 2007]. In one instance, to overcome difficult circumstances such as a famine, a king was killed by his subjects and his body was buried in a bog as an offering to a goddess [Joy 2009]. This was because the king was married to the goddess of fertility, and the famine was thought to be the result of the goddess’s displeasure at the king’s bad behavior. Therefore, to appease the goddess and to pray for a good harvest, the king was sacrificed to her [Kelly 2006]. A wooden stick figure called a pole god was also excavated from the bog.

We found the logical structure of a “traumatic structure”, and specifically of “Natural Born Intelligence (NBI)”, in the meaning of a bog body, which shows the knowledge and intelligence used by ancient people to overcome their situation; in other words, creativity. A traumatic structure is a logical structure in which a positive antinomy, in which both A and B are true, and a negative antinomy, in which neither A nor B is true, hold at the same time, thereby opening an outside to the antinomy [Gunji 2019, Gunji & Nakamura 2022]. This structure can be demonstrated by the spatial concept found in old Japanese paintings, such as many Rimpa paintings. Rimpa paintings depict seemingly childish mountain ranges as a series of semicircles that resemble a graphic plane. Nakamura and Gunji named this flat, plane-like representation of the mountains the “Kakiwari” structure, after the Japanese name for a stage backdrop [Nakamura & Gunji 2018, Nakamura 2021]. A Kakiwari is a painting of a close-up view and a distant view on a wooden board and is therefore a positive antinomy of both; but at the same time, by invalidating the full use of metric space, it negates the meaning of the concepts of close-up and distant views, thereby constructing a negative antinomy. This is the traumatic structure.

The bog body is that of a king, who mediates between the human world and the world of the gods and is therefore a positive antinomy between the two; but in the event of a famine he is killed and negated, forming a negative antinomy between the two. This is a way of accessing an outside world that is different from the world of the gods we normally think of, and of stopping the famine. It is therefore a traumatic structure, and by developing it as a Kakiwari, it can be implemented as a pictorial expression.

We also visited the discovery site of the Tollund Man, a typical bog body, and, based on the landscape, we attempted to implement this model of creativity by developing the negative antinomy into artwork. Nakamura painted “Body as Anti-Anthropomorphic Landscape,” a series of paintings in which she found a Kakiwari structure in the landscape around the bog. Gunji created an installation artwork using bog plants and his own body.

Natural Born Intelligence that summons <emotions=politics>

Intelligence and life have in most cases been understood within the framework of artificial intelligence (AI). The framework of AI here is one that treats the relating of two qualitatively different concepts as understanding. Autopoiesis, a model of life, relates the inside and outside of a system in a self-referential way, and the brain, a model of intelligence, is understood as the center that relates various parts to the whole. AI is a type of machine learning that relates unorganized data to organized data. What about swarm intelligence? There too, collective behavior relates individuals to society. All of these are merely restatements of self-reference that relate parts to the whole and inside to outside. There is no creativity that goes outside the original two concepts.

Emotions can be thought of as politics that suppress individual processing at the lower level through top-down commands. This emergence of emotion=politics does not occur within the framework of AI. Bateson described the gentle bite in animal society as a negation of aggression. An attack is an intense relationship between two logical concepts, enemy and ally. If this were all, it would be nothing more than a variation of self-reference. However, here there is both an attack and a negation of the attack. If an attack were not presupposed, a gentle bite would never mean a negation of the attack. In other words, a gentle bite is an attack that accepts and relates both the enemy and the ally and at the same time makes both meaningless. It results in a politics=emotion that avoids violence. It was created outside the binary opposition of enemy and ally.

The Natural Born Intelligence (NBI) we are proposing can be considered a generalization of this system of gentle biting, different from a double bind. NBI is an intelligence that simultaneously establishes a positive antinomy that accepts both parties that constitute a binary opposition and a negative antinomy, and summons the outside of the binary opposition. As mentioned above, it can be understood that a simple attack or an AI framework remains a positive antinomy and lacks a negative antinomy.

NBI can be found not only in societies but also in brains, animal groups, and even simpler chemical reaction systems. It can be exemplified as a system that uses quantum logic while operating with fluctuations external to quantum logic. The positive antinomy of different Hilbert spaces is quantum entanglement; ignoring this and accepting fluctuations that go back and forth between different Hilbert spaces is the negative antinomy. When both are accepted, quantum logic breaks down, but if one still tries to use quantum logic while maintaining its logicality, then, conversely, extremely large fluctuations will be summoned from the outside. These can be thought of as a kind of quantum coherence that resonates with the movements of different Hilbert spaces. When the framework of NBI is applied to neurons, there is no doubt that the emergence of emotions=politics can also be understood by means of fluctuations external to quantum theory.

A Cybernetic Approach to the Intentionality of Artificial Systems: A Regulatory Function Instead of a Representational One?

The question of whether artificial systems can possess intentionality—the ability to be "about" something—is a central issue in the philosophy of mind and cognitive science. Traditional theories offer different explanations: representationalism sees intentionality as a matter of internal symbols, functionalism focuses on causal roles, and enactivism ties it to embodied interaction. However, each of these approaches has its limitations. Representational theories struggle with the symbol grounding problem, functionalism risks reducing intentionality to mere input–output processing, and enactivism may be too restrictive in limiting intentionality to biological organisms.

Cybernetics provides a fresh perspective by redefining intentionality as a regulatory function rather than a representational property. In this view, intentionality emerges from the way autonomous systems regulate themselves through feedback loops and homeostasis. Instead of being about static internal representations, it is about dynamic adaptation—how a system maintains stability and achieves its goals within a changing environment. By this definition, even simple artificial systems that adjust their behavior based on internal states and external conditions can exhibit a minimal form of intentionality.
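A minimal sketch of this claim (our illustration; the class name and all numbers are invented for the example): a negative-feedback regulator whose only "aboutness" is the set-point it defends, with nothing representational anywhere in the loop:

```python
class HomeostaticRegulator:
    """Toy cybernetic system: keeps an internal variable near a set-point
    via negative feedback, despite external disturbances."""

    def __init__(self, set_point, gain=0.5):
        self.set_point = set_point
        self.gain = gain
        self.state = 0.0

    def step(self, disturbance):
        error = self.set_point - self.state            # sensed deviation from the goal
        self.state += self.gain * error + disturbance  # corrective action vs. perturbation
        return self.state

reg = HomeostaticRegulator(set_point=37.0)
for d in [5.0, -2.0, 0.5, -1.0, 0.0]:
    print(round(reg.step(d), 2))  # state converges toward 37.0 despite disturbances
```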

This presentation explores how cybernetic principles can reshape our understanding of intentionality in AI. Can artificial systems develop intentionality through adaptive regulation, even without consciousness? If so, what conditions would need to be met for AI to be considered truly intentional in a way comparable to living beings? These questions will be examined by integrating insights from the philosophy of mind, cognitive science, and systems theory.

By moving away from static, symbol-based models and toward a more dynamic, process-oriented view, this approach could lead to a more nuanced understanding of intentionality, one that bridges the gap between human cognition and artificial intelligence. Ultimately, this perspective may offer new ways of designing AI systems that are not only functionally competent but also capable of self-directed regulation, a key feature of intelligent agency.

A Framework for the Ethical Design and Accountability Attribution of Military Robot Systems

Recent advances in the use of military robots in combat raise serious ethical questions. Military robots have expanded their role in combat from simple reconnaissance and surveillance to deadly strikes on enemy positions. As military robotics technology advances, the military continues to push for greater robot autonomy in order to reduce operation and maintenance costs. However, as military robots begin to make decisions on their own, moral responsibility for their actions becomes blurred.

If a drone mistakenly destroys a school instead of its intended target, who is responsible? How, then, should an ethical military robot be designed? How can a military robot reasonably be held accountable for wrongful killing? How the state, society, and even individuals respond to the challenge of military robots will become particularly important and urgent.

This article is divided into five parts. Section 1 provides a background introduction to the ethical design and accountability of military robots. Section 2 summarizes the current status and position of military robots and future development trends, the advantages and disadvantages of military robots compared with humans, the uniqueness of military robot ethics, the difficulty of their ethical design, and the dilemma of their accountability.

Section 3 focuses on how to design ethical algorithms for military robots. Derek Leben’s work on robot ethics outlines and justifies an approach for crafting and assessing ethical algorithms used in autonomous machines, including self-driving vehicles and military rescue robots; Leben contends that the evaluation of these algorithms should hinge on their effectiveness in fostering cooperation among self-interested entities. Starting from the premise that moral judgments are the product of evolutionary pressures for cooperative behavior in self-interested organisms, this paper compares the results of applying Rawls’s contractualism with those of applying utilitarianism, free-willism, deontology, and other moral theories to various dilemma games, and argues that only contractualism produces Pareto optimality and cooperative behavior across a variety of dilemmas. The Leximin program, a refined version of the Maximin algorithm based on the contractualist difference principle, is suitable for application to a wide variety of decision spaces.

Section 4 centers on the attribution of responsibility for military robots. This paper contends that the challenge of determining the accountability of autonomous robots can be resolved by situating it within the framework of the military chain of command. Decision-making in war is multi-layered, and the military hierarchy is a system that assigns responsibility and limits autonomy among different levels of decision-makers.

Section 5 proposes specific countermeasures to address the ethical issues of military robotics. At the technological level, algorithms that continue to optimize military robots’ sensitivity to harm while their level of automation continues to increase are an essential measure for ensuring their technological safety. At the policy and regulatory level, the transparency of military robotics research and development should be strengthened, along with the accountability of those involved.
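For concreteness, a generic sketch of the leximin decision rule as it is standardly defined (this is not code from the paper, and the scenario and payoffs are invented): leximin picks the option whose worst-off party fares best, breaking ties by the second-worst, and so on:

```python
def leximin_choice(options):
    """Return the option whose ascending-sorted payoff vector is
    lexicographically greatest: best worst case, then best second-worst..."""
    return max(options.items(), key=lambda kv: sorted(kv[1]))

# Hypothetical payoffs to three affected parties under three actions.
options = {
    "strike":   [9, 1, 1],
    "hold":     [4, 3, 5],
    "withdraw": [2, 6, 6],
}
name, payoffs = leximin_choice(options)
print(name, payoffs)  # hold [4, 3, 5]: its worst-off payoff (3) is the highest
```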

Transitive Self-Reflection—A Fundamental Criterion for Detecting Intelligence

The concept of transitive self-reflection is deeply rooted in philosophical discourses, including Hegel’s notion of "recognition" [1] and Sartre’s "being-for-others" [2]. These ideas emphasize the necessity of external interaction for self-awareness. Modern cognitive science complements these perspectives, highlighting how transitive self-reflection engages both introspection and external feedback to refine one's sense of self [3]. This contrasts with intransitive self-reflection, which focuses solely on internal states, lacking the enriching influence of external engagement [4].

The concept of intelligence has been defined in various ways, with numerous tests designed to measure its presence. A common underlying assumption in these definitions is that human cognitive abilities serve as intelligence benchmarks. In this paper, we adopt the same perspective: for a system to be considered intelligent, its cognitive performance must be comparable to that of humans. A system lacking a fundamental aspect of human cognition cannot be deemed truly intelligent. Based on this premise, we explore a crucial prerequisite for intelligence.

Self-reflection—the ability to introspect and analyze one's thoughts, emotions, and actions—is often regarded as a hallmark of higher intelligence [5]. However, an even more sophisticated indicator is what we term "transitive self-reflection". This concept extends beyond mere self-awareness; it involves understanding not only oneself but also how one is perceived by others, as well as how others perceive each other's perceptions of oneself [6]. Such multi-layered awareness indicates a complex cognitive architecture capable of modeling intricate social dynamics and predicting the cascading effects of one's actions within a network of interacting minds [7].
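To fix ideas (a toy data structure of our own devising, not from the paper; names are arbitrary), the layering can be pictured as minds holding models of other minds, which may themselves hold models:

```python
from dataclasses import dataclass, field

@dataclass
class Mind:
    """A mind plus its (possibly nested) models of other minds."""
    name: str
    models: dict = field(default_factory=dict)  # other's name -> Mind as modelled

# Alice's model of how Bob thinks Carol perceives Alice: layers of
# perception stacked on top of Alice's own self-awareness.
alice = Mind("Alice", models={
    "Bob": Mind("Bob", models={
        "Carol": Mind("Carol", models={"Alice": Mind("Alice")})
    })
})

def depth(mind):
    """How many layers of modelling this mind sustains."""
    return 1 + max((depth(m) for m in mind.models.values()), default=0)

print(depth(alice))  # 4: self, Bob's view, Bob's model of Carol, her view of Alice
```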

Transitive self-reflection is not limited to social interactions but also extends to how one's image is mirrored in the environment—through reflections in mirrors, water, or other reflective surfaces. These external representations provide an additional perspective on self-awareness, reinforcing the ability to perceive oneself from an external viewpoint [8].

Evidence of transitive self-reflection is apparent in various aspects of human behavior. Social interactions require individuals to constantly monitor and adjust their behavior based on how they believe they are perceived [6]. Gossip exemplifies this process, as people attempt to infer how their actions are interpreted and relayed by others [5]. Strategic thinking in games like poker demands an advanced level of transitive self-reflection, where players must anticipate not only their opponents' strategies but also their opponents' understanding of their strategies. Even emotions such as embarrassment arise from transitive self-reflection, as individuals become aware of how they are perceived in a negative light [8].

This study examines transitive self-reflection as a fundamental criterion for intelligence, particularly in the context of artificial intelligence. We investigate its manifestation in humans, its potential presence (or absence) in other animals [9], and the feasibility of replicating it in machines. Ultimately, we argue that transitive self-reflection may be the key to advancing the next generation of intelligent systems. To explore this, we conduct a series of experiments with several popular artificial intelligence systems based on Large Language Models (LLMs), assessing the type of intelligence they currently exhibit.

These systems cannot independently produce genuinely new ideas. However, they serve as effective tools for trained specialists, who can iteratively refine their outputs to construct meaningful works.

The structure of this paper is as follows: Chapter 2 introduces the concept of transitive self-reflection, and Chapter 3 explores its connection to intelligence. The work concludes with a discussion of future directions.

Bibliography

  1. Hegel, G. W. F. (1977). Phenomenology of Spirit (A.V. Miller, Trans.). Oxford University Press. (Original work published 1807).
  2. Sartre, J. P. (1956). Being and Nothingness (H.E. Barnes, Trans.). Philosophical Library. (Original work published 1943).
  3. Frith, C. D., & Frith, U. (1999). Interacting minds--a biological basis. Science, 286(5445), 1692-1695.
  4. Mead, G. H. (1934). Mind, Self, and Society from the Standpoint of a Social Behaviorist. University of Chicago Press.
  5. Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition, 79(1-2), 1-37.
  6. Goffman, E. (1959). The Presentation of Self in Everyday Life. New York: Anchor Books.
  7. Tomasello, M., Call, J., & Hare, B. (2003). Chimpanzees understand psychological states – the question is which ones and to what extent. Trends in Cognitive Sciences, 7(4), 153-156.
  8. Tennen, H., & Affleck, G. (1991). The puzzles of self-esteem: A clinical perspective. In Snyder, C.R., & Forsyth, D.R. (Eds.), Handbook of Social and Clinical Psychology: The Health Perspective (pp. 100-119). New York: Pergamon Press.
  9. Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515-526.

Thinking as Decolonial Praxis

In our phenomenal world, engagements across cultures in critical dialogue, forming concepts in language towards projects of building intelligence and sharing wisdom, have historically been troubled by misrepresentation and thoughtlessness. This makes intelligibility of the phenomenal world difficult. In spaces that constrain our efforts to be seen by others, globally and locally, how might we realize each other outside of dominant, socially fragmenting systems of knowledge and representation to produce new meaning and visibility in our thinking about self and world?

Taking knowledge systems as social and dynamic, this presentation addresses thought and appearance as connected phenomena in the production of knowledge. It articulates an account of thought as relational and actional among beings in the world of appearances to “engage the relationship between consciousness and our social and cultural situatedness.” [Martinez, J.] It understands knowledge as epistemē—in the Greek sense of ἐπίσταμαι—as going beyond information itself to include understanding and belief standing for knowledge in appearances (wissen und kennen). Often, knowledge is conflated with recognition or information. These are necessary but insufficient components of the intellectual faculty. A concept rooted in Latin, the intellect includes an aspect of discernment, perhaps even judgement, and along with thought requires a notion of experience as embodied: spatio-temporality. The approach is comparative and interdisciplinary. The perspective is that of decolonial and feminist critiques of modern epistemology as extractivist. [Alcoff, L.] The method is philosophical dialogue that understands thought as a dwelling amidst non-dominant differences in projects of knowledge and supports creative movement within oppressive knowing systems as epistemic liberation.

Thought is an important mode of organizing human life and the natural world. It begins with embodiment and exists in language. [Arendt, H.] It is not outside the realm of appearance and does not take us out of the world. Nor is it the supreme characteristic of the human condition. It is a particular faculty of being that can reasonably guide the thinker, as Aristotle described, toward wisdom—or Weisheit—as the state of having good judgement. Where the faculty of understanding synthesizes data and data-representations of the natural world into sense or meaning, thought arrives as a sort of communicative aspect in the phenomena of being and appearing. [Erkenntnis, Kant, I.] As a relational and active function, its logic is semiotic. [Peirce, C.S.] As a linguistic function, it includes misrepresentation.

Often without our awareness, familiarity with our linguistic systems fosters thinking and communicative codes that act as hostilities in oppressed–oppressing relations. [Foucault, M., Lugones, M.] This includes the communicative world that we call digital and how we code the bots as objects of intelligence in relation to the natural world. To be sure, philosophic and scientific representations of natural-world phenomena are problematic, especially when claiming to be singular and universal truths. The idea of a nation, for example, is hard to define and sustain. It appears as a known entity but is fixed politically in a significant effort to sustain itself; its language and border are made official, its linguistic and territorial nationalism often violently defended. [Barbour, S.] As we respond to appearing phenomena and give meaning to everyday life experience, the monologic and single-axis perceptions engaged by our knowledge systems miss intersections of identity and multiplicities of experience. [Crenshaw, K.] From within thinking systems that are fixed and habituated, phenomena are made hidden. The outcome in the immediacy of human experience is thoughtlessness and unintelligibility. Among us, thought does not solve an immeasurable problem or provide a universal truth; it creates finite, provincial meaning. Yet its “metaphysical fallacies on the contrary contain the only clues we possess to what thinking means to those who engage in it.” [Arendt, p. 23] I take thought as less about uncovering truth than about making meaning. And I understand thinkers as appearing beings who live in semiosis at the intersections of a multiplicity of confounding identities. [Spillers]
