
List of accepted submissions

Ethical Design of Social Robots Based on Confucian Etiquette Practices

Confucian ethics is a form of virtue ethics which, unlike utilitarianism and deontology, focuses on moral practice and holds that moral norms are inevitably embedded in ritual practice. It derives from the thought of Confucius, who emphasized the virtues of “benevolence”, “righteousness”, “propriety”, and “wisdom”.[] Among these, “ritual” occupies a central position as the foundation of social order and interpersonal relationships. In Confucian ethics, “rites” are not only social norms or etiquette but also the basis of moral behavior and the key to maintaining social order and personal virtue. According to Confucius, etiquette helps people live in harmony in society and form a stable social structure. The practice of etiquette encompasses norms of behavior in daily life, ritual activities, and respect for elders and social roles.

Confucian etiquette practice is characterized by the following aspects. First, respect and modesty: etiquette requires individuals to show respect and modesty toward others in their interactions, expressed through courtesy, consideration, and appropriate demeanor, emphasizing the principle of putting people first. At the same time, this respect and modesty should be mutual for both sides of the relationship; the “Li Ji - Qu Li Shang” says: “What the rules of propriety value is reciprocity. If I give a gift and nothing comes in return, that is contrary to propriety; if the thing comes to me, and I give nothing in return, that also is contrary to propriety.” [] Second, harmony and the mean: Confucianism advocates the pursuit of harmony and moderation in all kinds of social interactions, avoiding extremes and opposition and emphasizing balance and neutrality. Third, hierarchical order: Confucian etiquette focuses on the maintenance of social hierarchy and order, such as respecting elders and authority and showing proper manners in interactions. Fourth, the cultivation of interpersonal relationships.

Social robots, as artificial intelligences with a certain degree of autonomy, can be designed so that their functions deeply integrate the practical features of Confucian etiquette. [] “Hiding etiquette in artifacts” is the core of Confucian ethical thinking about science and technology, emphasizing the embodiment and transmission of etiquette through the production and use of artifacts. In ancient China, people incorporated etiquette into the making and use of artifacts to strengthen social members' knowledge of, and compliance with, etiquette, thereby realizing “teaching without words”, i.e., conveying moral norms and social values through practical actions and the display of objects. As a “formal” approach, “ritual” needs to be embodied through concrete means, and “artifacts” are precisely an important way to embody and manifest “ritual”.[] In the design of social robots, we can observe social etiquette and norms in the design of their language functions and take social etiquette and manners as a guide in the design of their actions, so as to obtain a robot that speaks and acts with “manners”.

The Cognition System Theory

We state that when we think about cognition, we must have an ideal image or images of it in our minds at every moment. This paper proposes a cognition model as the ideal image for discussion. It is more of a theoretical or hypothetical perfect model than an entirely abstract one, as it can be completely concrete. This is so because it represents a theoretical position we hold almost meditatively, serving as a mental foundation critical for thinking about any matter. It is a perspective taken by us from an observer's position, which helps us continually reflect on cognition, ourselves, objects of thinking, and knowledge. Ultimately, cognition from the observer's position is thinking about cognition. Thus, we propose one possible approach that takes a holistic view, which we conceive as the theory of the cognition system. Taking this approach, we examine the co-construction of ontologies and epistemologies and employ a three-tier methodology to create a position from which a thing, concept, or matter can be grasped and thought about.

This system represents a complex and dynamic structure of ideas and models the concept of cognition by defining three interconnected levels: philosophical, general scientific, and specific scientific modelling. Based on these premises, we create the position by first determining the philosophical grounds, which involve reflecting on core ideas, shaping a paradigm, and formulating principles and approaches. Next, we define the general modelling principle, outlining the forms of knowledge and methods used to model cognition and explore its elements, whether as a phenomenon, process, or tangible object, and us as the empirical subjects and consciousness. Finally, we select from various scientific approaches, concepts, theories, and research principles to observe cognition through specific entities. This three-level conceptualisation is reflected in the presentation's three sections. Then, the fourth section presents the synthetic culmination, answering the question about the observer's nature and its conceivable actions and potentials.

From “no sense of place” to “no sense of the world”: Screen-centered senses and the problem of worldlessness

This article investigates the transformative relationship between screen technologies and human perception through an interdisciplinary framework combining Heidegger’s “revealing/concealing thesis” and postphenomenological analysis. The study systematically examines how screens mediate human-world interactions, reshape sensory engagement, and ultimately reconfigure existential spatial-temporal awareness.

First, Heidegger’s dialectic of technological revealing/concealing provides the foundational lens. While screens reveal hyper-visualized information flows, they simultaneously conceal their material infrastructure and algorithmic operations. This dual process creates an asymmetrical mediation where users engage with curated realities while remaining oblivious to the underlying technical processes that construct them. Postphenomenology further unpacks this mediation by analyzing how screens actively reconfigure perception rather than merely transmitting neutral content.

Second, Albert Borgmann’s “device paradigm” reveals screens’ paradoxical nature as both focal objects and invisible conduits. While screens demand sustained attention through luminous interfaces and infinite scroll dynamics (focal properties), they operate through a “device logic” that obscures their socio-technical complexity. This tension manifests in what we term “distracted immersion”—users become deeply absorbed in screen content while remaining detached from the contextual realities these interfaces mediate.

Third, Meyrowitz’s “no sense of place” thesis is expanded through contemporary screen practices. Location-independent behaviors enabled by smartphones and video conferencing induce spatial dislocation, where physical environments become interchangeable backdrops to screen activities. This de-localization fosters a paradoxical “ubiquitous placelessness”—users operate simultaneously everywhere (digitally) and nowhere (physically), eroding embodied connections to specific locales.

Fourth, postphenomenological distinctions between screen-mediated and screen-centered perception clarify escalating technological influence. Screen-mediated perception enhances human capabilities (e.g., microscopes extending vision), while screen-centered perception creates self-referential ecosystems (e.g., social media feeds prioritizing algorithmic engagement over external reality). The latter generates a closed-loop effect where sensory input and behavioral output both originate from screen interfaces, progressively detaching users from unmediated lifeworld experiences.

Finally, Arendt’s concept of “worldlessness” frames the socio-political consequences. As screens fragment shared reality into personalized digital enclaves, the sensus communis essential for public discourse deteriorates. Social media’s filter bubbles and recommendation algorithms exemplify this erosion, replacing communal frames of reference with hyper-individualized perceptual universes. This dissolution of common ground manifests in polarized discourses and the collapse of shared epistemic foundations.

The analysis concludes that screens function as existential mediators reshaping human perception across three dimensions: 1) perceptual (mediated vs. direct experience), 2) spatial (disembodied presence vs. situated embodiment), and 3) social (fragmented collectives vs. communal world-building). These transformations demand critical examination of how screen technologies reconfigure the fundamental structures of human experience and collective existence.

Axiology and the Evolution of Ethics in the Age of AI

Artificial intelligence (AI), particularly autonomous systems, challenges traditional ethical frameworks by reshaping human values, agency, and responsibility. This paper argues that axiology—the philosophical study of values—offers a critical foundation for AI ethics by accommodating the dynamic relationship between technology and morality. Unlike rigid ethical theories, axiology provides an adaptive approach to algorithmic bias, depersonalized healthcare, and AI-mediated governance.

We propose an integrative axiological model that synthesizes deontological, utilitarian, and virtue ethics to ensure that AI aligns with pluralistic human values. This framework balances duty (transparency, fairness), outcomes (social good, efficiency), and virtue (human dignity, trust), akin to multicriteria decision analysis (MCDA) (Sapienza, Dodig-Crnkovic, and Crnkovic, 2016), which systematically evaluates competing priorities in complex decision-making. For example, while utilitarianism might favor AI’s cost-saving healthcare diagnostics, virtue ethics ensures patient autonomy remains central, and deontology requires transparency in algorithmic decisions. This synthesis prevents AI from privileging one value (e.g., efficiency) at the expense of others (e.g., privacy), while grounding choices in multicriteria-based decisions.
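As a rough illustration of how such a multicriteria balance could be operationalized, the sketch below scores two hypothetical AI configurations with a weighted-sum MCDA; all criteria, weights, and scores are illustrative assumptions of ours, not taken from the cited paper.

```python
# Weighted-sum MCDA sketch: score hypothetical AI deployment options against
# duty-, outcome-, and virtue-based criteria. All criteria, weights, and
# scores are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "transparency": 0.25,  # duty (deontology)
    "fairness":     0.25,  # duty (deontology)
    "social_good":  0.20,  # outcomes (utilitarianism)
    "efficiency":   0.10,  # outcomes (utilitarianism)
    "dignity":      0.10,  # virtue ethics
    "trust":        0.10,  # virtue ethics
}

def mcda_score(option):
    """Weighted sum over criterion scores normalized to [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in option.items())

# Two hypothetical diagnostic-AI configurations.
opaque_fast = {"transparency": 0.2, "fairness": 0.5, "social_good": 0.8,
               "efficiency": 0.9, "dignity": 0.4, "trust": 0.3}
explainable = {"transparency": 0.9, "fairness": 0.8, "social_good": 0.7,
               "efficiency": 0.6, "dignity": 0.8, "trust": 0.9}

print(f"opaque_fast: {mcda_score(opaque_fast):.3f}")  # 0.495
print(f"explainable: {mcda_score(explainable):.3f}")  # 0.795
```

The point of the sketch is not the particular numbers but the structure: no single criterion (e.g., efficiency) can dominate the score, because the weights encode a deliberate balance across the three ethical families.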

Case studies illustrate AI’s dual impact: In education, AI-powered learning enhances accessibility but may risk dehumanizing assessment. In healthcare, AI-driven diagnostics improve accuracy, yet excessive reliance on AI may threaten patient trust if empathy is overlooked. In governance, AI improves transparency but may raise ethical concerns over surveillance and bias in policing. These examples underscore the need for an evolutionary ethics, where values shift alongside technological advances.

This model aligns with Digital Humanism, which resists reducing humans to data points, and Responsible AI, which prioritizes accountability. Together, they advocate for AI that enhances—not undermines—human dignity, equity, and democratic agency.

To prevent ethical stagnation, policymakers and developers may adopt this axiological lens, ensuring that AI evolves as a tool for societal flourishing rather than a destabilizing and depersonalizing force. By focusing on axiology, we reframe AI ethics as a living discipline—one that reconciles competing values and safeguards humanity’s moral commitments in the age of algorithmic AI.

References

Sapienza, G., Dodig-Crnkovic, G., and Crnkovic, I. (2016). “Inclusion of Ethical Aspects in Multi-Criteria Decision Analysis.” In Proceedings of the 1st International Workshop on Decision Making in Software ARCHitecture (MARCH), WICSA and CompArch, Venice, April 5–8, 2016. IEEE. DOI: 10.1109/MARCH.2016.5. ISBN: 978-1-5090-2573-2.

Application of Burgin's General Theory of Information: Autopoietic and Meta-Cognitive Machines

Biological systems inherit the knowledge to build, operate, and manage a society of cells with complex organizational structures, where autonomous components execute specific tasks and collaborate in groups to fulfill systemic goals using shared knowledge. This knowledge enables them to receive information through various senses and create a knowledge representation in the form of associative memory and an event-driven interaction history, consisting of various entities, relationships, and behaviors. The General Theory of Information, posited by the late Prof. Mark Burgin, allows us to model knowledge in the form of named sets/Fundamental Triads that capture various entities, relationships, interactions, and behaviors.

In this paper, we describe a digital genome implementation of distributed software systems that exhibit autopoietic and meta-cognitive behaviors using a knowledge representation capturing various entities, relationships, interactions, and behaviors. The resulting associative memory and event-driven interaction history allow us to implement self-regulating software that manages its specified cognitive workflows. Two use cases are demonstrated.
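To make the idea concrete, the sketch below models a knowledge representation of fundamental triads with associative recall and an event-driven interaction history; the class and method names are our own illustrative assumptions, not the authors' implementation.

```python
# Sketch of a knowledge representation built from fundamental triads
# (named sets) with associative recall and an event-driven interaction
# history. Names and structure are illustrative, not the authors' code.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Triad:
    """A fundamental triad: subject --relation--> object."""
    subject: str
    relation: str
    obj: str

@dataclass
class KnowledgeBase:
    triads: set = field(default_factory=set)
    history: list = field(default_factory=list)  # event-driven interaction log

    def assert_triad(self, subject, relation, obj, event="observed"):
        t = Triad(subject, relation, obj)
        self.triads.add(t)
        self.history.append((event, t))  # every assertion is logged as an event

    def related(self, entity):
        """Associative recall: all triads touching a given entity."""
        return [t for t in self.triads if entity in (t.subject, t.obj)]

# A toy fragment loosely echoing the video-streaming use case.
kb = KnowledgeBase()
kb.assert_triad("encoder", "streams_to", "cdn_node")
kb.assert_triad("cdn_node", "serves", "viewer")
print(len(kb.related("cdn_node")))  # 2
```

Associative memory here is simply recall by shared entity, and the history list plays the role of the event-driven interaction record from which behaviors could later be reconstructed.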

The first use case leverages a digital genome to design, deploy, operate, and manage a distributed video streaming service. The system uses associative memory and an event-driven interaction history to maintain structural stability and ensure smooth communication among distributed components. This allows the application to self-regulate and adapt to changing conditions while maintaining expected behaviors.

The second use case demonstrates the implementation of a medical knowledge-based digital assistant designed to assist in the early diagnostic process. The digital assistant is intended to reduce the knowledge gap between patients and medical professionals. It leverages medical knowledge from various sources, including large language models, to provide accurate and timely information. The assistant uses associative memory and an event-driven interaction history to create a comprehensive knowledge representation. This helps in understanding the patient's symptoms and medical history, facilitating better communication and decision-making between patients and doctors. By bridging the knowledge gap, the digital assistant enhances the early diagnostic process, leading to more accurate diagnoses and improved patient outcomes. It supports medical professionals by providing relevant information and insights based on past interactions and learned knowledge.

Cognizing oracles, supersymbolic computing, structural machines, knowledge structures, and knowledge networks are key concepts used in the applications described in these use cases. Cognizing oracles interpret observations and make informed decisions using associative memory and an event-driven interaction history. Supersymbolic computing integrates symbolic and subsymbolic representations to enhance its information processing capabilities. Structural machines are unconventional knowledge processors that work with knowledge structures, ensuring that software systems can self-regulate and adapt. Knowledge structures organize representations of entities, relationships, interactions, and behaviors, enabling autopoietic and cognitive behaviors in software systems. A knowledge network facilitates the sharing and integration of information across different components. These concepts collectively enable the creation of intelligent, adaptive systems that manage complex tasks and improve user experiences, such as in distributed software systems and medical knowledge-driven digital assistants.

These advances demonstrate the usefulness of a theory that relates information, knowledge, matter, and energy and provides a schema for knowledge representation and operations, enabling a new class of digital automata.

From the Expansion of Economic Rationality to Its Re-Nationalization: Ethics, Purpose, and Geopolitics

Economic rationality has evolved significantly over the past decades, shifting from profit maximization to stakeholder theory and, more recently, to the emergence of purpose as a guiding principle for businesses. This shift reflects a broadening of economic intelligence, incorporating ethical, social, and environmental dimensions into corporate decision-making. However, rising geopolitical polarization is reshaping this paradigm: while companies increasingly embrace sustainability and inclusivity, the “cunning of economic reason” (borrowing from Hegel) now appears to align purpose with national strategic interests. This raises a fundamental question: is economic rationality truly evolving, or is it being reabsorbed into the logic of power?

This paper adopts a moral philosophy and business ethics perspective, critically engaging with the frameworks proposed by Robert Edward Freeman, Michael E. Porter, and Mark R. Kramer. It examines how the evolution from stakeholder theory to shared value is now undergoing a further transformation, where business purpose is increasingly shaped by geopolitical imperatives. The analysis problematizes these classical models, showing that we are moving beyond them—not by returning to Milton Friedman’s assertion that “the social responsibility of business is to increase its profits,” but by entering a new paradigm where corporate ethics must reconcile global responsibilities with national allegiances. Case studies of SpaceX, Anduril Industries, and Palantir Technologies illustrate how ethical frameworks are being redefined within this tension.

The findings indicate that the boundary between corporations and States is increasingly blurred: major tech firms not only generate economic and social value but also function as geopolitical assets, reinforcing national sovereignty through technological dominance. Palantir, for instance, exemplifies how economic intelligence becomes an extension of State power, merging AI, data control, and defense strategy. In this scenario, economic intelligence is shifting from an emancipatory force to a mechanism of national hegemony. In the uncertainty about whether purpose should prioritize the global or the local lies the core tension between ethics and realism.

This paper highlights a critical dilemma: as economic intelligence becomes increasingly tied to State interests, corporate ethics risk losing their autonomy, turning into instruments of digital raison d’État. If shared value is reduced to strategic advantage, can we still consider this an evolution of economic rationality, or is it a strategic involution, where corporate ethics are absorbed into power politics? In an era dominated by AI and big tech, preserving an authentic business ethic requires careful scrutiny of regulatory frameworks and cultural narratives to ensure that purpose does not become a new expression of 21st-century realpolitik.

Evolution of Intelligence from Active Matter to Complex Intelligent Systems: An Agent-Based Approach

An agent-based framework is proposed to describe the emergence of complex intelligent systems, starting from active matter and progressing towards increasingly cognitive/intelligent systems. In this approach, the distributed, concurrent information processing performed by different types of agents—from physical, chemical, and biological entities to ecosystems and social systems—bridges multiple levels of organisation. It provides an interdisciplinary synthesis that explains the role of agents in shaping emergent behaviours as foundations of cognition and intelligence, through developmental and evolutionary processes. The framework offers new insights into the organisation of natural agents and the evolution of natural and artificial intelligent systems.

The capacities we associate with agents originate in active matter, which manifests at various scales, from particles and molecules to entire ecosystems. Phenomena like self-assembly, self-organization, and autopoiesis represent systems’ innate ability to self-maintain and drive increasing structural and functional sophistication [1].

At the most fundamental level, Hewitt’s Actor Model [2] allows us to think of particles and molecules as computational agents engaged in continuous message exchange. Such frameworks show how richly patterned behaviors can emerge from relatively straightforward interactions at different scales. For example, in the molecular domain, as described by Mathews et al. [3], networks of molecules and cellular signaling pathways exhibit forms of memory, problem-solving, and adaptive reprogramming, which are rudimentary cognitive features.
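The actor-style view of interacting agents can be sketched minimally as follows; this is an illustrative toy in the spirit of Hewitt's model (private state, mailboxes, asynchronous sends, strictly sequential message handling), not a faithful formalization.

```python
# Toy actor in the spirit of Hewitt's Actor Model: each actor owns a mailbox,
# receives messages asynchronously, and processes them one at a time.
# Illustrative sketch only; names and messages are our own assumptions.
import queue
import threading

class Actor:
    def __init__(self, name, behavior):
        self.name = name
        self.behavior = behavior                  # called once per message
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)                 # non-blocking, asynchronous

    def stop(self):
        self.mailbox.put(None)                    # sentinel ends the loop
        self._thread.join()

    def _run(self):
        while (msg := self.mailbox.get()) is not None:
            self.behavior(self, msg)

log = []
molecule = Actor("molecule_a", lambda actor, msg: log.append((actor.name, msg)))
molecule.send("photon")
molecule.send("bind")
molecule.stop()                                   # drain mailbox, then join
print(log)  # [('molecule_a', 'photon'), ('molecule_a', 'bind')]
```

Even this minimal setup shows the key property the abstract appeals to: global behavior arises only from local, ordered message handling, with no shared state between agents.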

Recognizing cognition as a defining characteristic of living agents connects it directly to the chemical and biological foundations of life [4]. Even relatively simple organisms, such as bacteria, process information from their surroundings, adapt their behavior accordingly, and thus engage in a basic form of cognition [5]. These fundamental computations underpin the emergence of more advanced cognitive processes in complex organisms.

In synthesising this broad spectrum of agent-based approaches, we arrive at a coherent conceptual framework for investigating fundamental questions: How does intelligence arise naturally? What roles do agents and cognition play in ecosystems and the evolutionary process? How are material agents organised into hierarchies of complexity, and can artificial intelligence extend the cognitive reach of humanity?

Agent-based thinking offers a powerful, integrative tool for understanding both natural and engineered systems, illuminating the conditions under which intelligence and complex behaviour emerge. These perspectives set the stage for future research directions, increasing our grasp of the evolving interplay between life, information, and cognitive technologies.

References

  1. Dodig-Crnkovic, G. (2017) Cognition as Embodied Morphological Computation. In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 19-23.
  2. Hewitt, C. (2010) Actor Model of Computation: Scalable Robust Information Systems. https://arxiv.org/abs/1008.1459
  3. Mathews, J., Chang, A. J., Devlin, L., & Levin, M. (2023). Cellular signaling pathways as plastic, proto-cognitive systems: Implications for biomedicine. Patterns (New York, N.Y.), 4(5), 100737. https://doi.org/10.1016/j.patter.2023.100737
  4. Ford, B.J. (2023) The cell as secret agent—autonomy and intelligence of the living cell: driving force of Yao. Academia Biology;1. https://doi.org/10.20935/AcadBiol6132
  5. Miller, W. B. (2023) Cognition-Based Evolution. Natural Cellular Engineering and the Intelligent Cell. Routledge. Taylor & Francis. https://doi.org/10.1201/9781003286769
Knowledge as an emergent effect of the complexity of large language models

The following abstract proposes an interpretation of the phenomenon of knowledge that appears during the use of large language models (LLMs). Language, which is also a social phenomenon, is proposed as the ontological context of this knowledge. Knowledge is interpreted here as an emergent effect of a complex system, namely the working LLM. To explain this effect, we propose building, for each input token, a trajectory based on the theory of discursive space. These trajectories traverse the manifold containing all the computational spaces that each token passes through. This reasoning is justified by the nonlinear and nondeterministic nature of LLMs. The task requires at least a preliminary understanding of knowledge as a descriptive category. Evidence of the importance of knowledge in the context of LLMs is also provided by research, e.g., by Fierro et al., Heersmink et al., Peterson, Kim, and Thorne.

The approach to knowledge here is pragmatic, which means that it is assumed that the form of retention/articulation of knowledge is broadly understood linguistic utterances. Language is a product of social circumstances and therefore remains permanently determined historically and territorially. It also produces deep, abstract phenomena capable of transferring knowledge, which primarily include discourse. This approach was proposed by Michel Foucault.

The result of LLM training is a stable computational structure of great complexity, which is illustrated by the large number of parameters. The resulting structure, which is a composition in time of many multidimensional computational spaces, is not strictly deterministic.

In the process of processing semantic inferences in a model based on a stable, trained set of parameters, a specific "phase transition" occurs from the numerical level to the epistemic level (of knowledge as a phenomenon). The concept of emergence can be used to describe this "transition", and the hypothesis can be formulated that knowledge is an emergent effect of a complex LLM system, in the sense given to emergence by Philip Clayton, developing the definition by el-Hani and Pereira.

The emergence problem can theoretically be solved by a model of the trajectory of semantic signatures, i.e., vectors in different spaces that build the LLM, according to the order of calculations.

The formal way of analyzing knowledge in LLMs is to reconstruct the trajectories of semantic signatures in a manifold, which would consist of all the computational spaces of the model. These trajectories would be a representation of the particular model's knowledge and would be visible to the outside as an emergent phenomenon of knowledge. This approach would constitute an extended application of the definition of knowledge proposed in the theory of discursive knowledge by Rafal Maciag: “knowledge is a set of states of gnosemes in an n-dimensional manifold that can be interpreted locally as a knowledge space”.

Philosophical issues:

  1. Epistemological issues—the problem of knowledge as a social phenomenon;
  2. Ontological issues—the problem of complex systems and accompanying phenomena, e.g. emergence;
  3. Hermeneutical issues—since the subject of the research is an artificial text, the hermeneutical context becomes important.
Intelligence as the Capacity to Overcome the Complexity of Information: The Search for Unity in Diverse Forms of Intelligence
  1. Introduction and Motivation

In his 2011 bestseller “Thinking, Fast and Slow” [1], Daniel Kahneman introduced one of his rules of common thinking: “A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth.” This rule applies to sentences that can be qualified as true or false. A similar rule can be applied to concepts linking words or terms with their meanings. There seems to be a sociolinguistic correspondence between the frequency of use of terms or expressions, in particular those with philosophical significance, and the unrecognized diversity of their understanding. This diversity is obscured by the fact that frequent use of these words generates an impression of familiarity, and familiarity generates the illusion of the identity and existence of a unique denotation independent of any inquiry—the conviction that the meaning is obvious and uniform. As a result, people use frequently invoked words with diverse subjective understandings while remaining convinced of their objective, uniform, commonly shared meaning. When confronted with differences in the understanding presented by others, they claim that the different understanding is erroneous.

Intelligence, either artificial, natural, or human, has become an illustrative instance of this regularity. The term is present everywhere, in particular when qualified as artificial and used in its staple abbreviation AI, or when it appears unqualified when understood as human.

There are some curious differences between the ways in which human and artificial forms of intelligence are viewed. For a long time, there has been an unusual consensus, among experts and laypeople, that human intelligence is not only diverse but that its different forms are independent and possibly uncorrelated, each with a separate psycho-neurological mechanism. Thus, human intelligence can be fluid or crystallized following the division introduced in 1943 by Raymond Cattell [2], splitting Charles Spearman’s concept of general intelligence present in psychology since the beginning of the 20th century into two different capacities. The former was understood as a purely general ability to solve unexpected problems without prior preparation or experience, and the latter consisted of long-established discriminatory habits acquired through learning or training. Incidentally, this distinction can be considered a precedent to Kahneman’s popular and recent distinction of fast (habitual, rigid) and slow (goal-oriented, flexible) thinking [1].

Further divisions of intelligence came in the 1980s. Robert Sternberg introduced his triarchic theory of intelligence, separating it into analytical, creative, and practical intelligence [3]. At about the same time, Howard Gardner introduced in his 1983 book “Frames of Mind: The Theory of Multiple Intelligences” [4] the idea of “intelligences” in the plural form and presented a list originally consisting of seven types: linguistic, logical–mathematical, spatial, musical, bodily–kinesthetic, interpersonal, and intrapersonal. In 1995, he added an eighth type: naturalistic intelligence. This triggered multiple attempts to distinguish specific types of such multiple intelligences. Divisions of human intelligence have been made using diverse criteria and different levels of argumentation, but they have always been motivated by criticism of the inadequacy of Spearman’s original concept of general intelligence and of attempts to measure it using variations of William Stern’s and Alfred Binet's/Théodore Simon’s IQ tests. Today, there is no agreement regarding the selection of criteria, names, and the degree of correlation, but the idea of multiple intelligences has become a standard in the study of human cognitive abilities.

The typical view of artificial intelligence (AI) relates it to human intelligence (such as a simulation, emulation, or recreation of the latter), and the main goal of current technological research and innovation is the achievement of artificial general intelligence (AGI), which does not have a commonly accepted or even discussed definition but is presented to the general audience in descriptions of the following type: “AGI [...] a system that is capable of matching or exceeding human performance across the full range of cognitive tasks.” [5]. This category error, blurring the distinction between the abstract concept and its individual realizations, characteristic of virtually all instances of the discourse on artificial intelligence (general or otherwise), perpetuates the hidden comprehension of its unity.

There are separate names for specific types of technological realizations of AI (generative AI based on Large Language Models (LLMs) in neural networks with a deep learning architecture, followed by Large Reasoning Models (LRMs) heavily dependent on external prompts, i.e., minimizing their autonomy; agentic AI, re-engaging some forms of algorithmic computation, still dependent on initial prompts but with increasing autonomy in its operation through auto-reprompting; neuro-symbolic AI with a hybrid architecture; causal AI; etc.). However, all of these qualifications reflect not the conceptual diversity of artificial intelligences but the technological differences in the search for the realization of the same goal of artificial general intelligence (AGI).

The inconsistency between conceptualizations of human multiple intelligences and the uniform idea of artificial general intelligence matching or even surpassing human cognitive abilities is only one of the many manifestations of conceptual chaos in the study of intelligence. The situation becomes even more complex when we consider the extensions of intelligence understood as a characteristic of natural but non-human entities present in diverse forms of life on Earth or the expected but not yet known forms of extraterrestrial life. This complication arises not only from the association of intelligence with life, whose conceptualization has been similarly convoluted, but also from the increased diversity of the morphological and behavioral forms involved in manifestations of intelligence in living objects, which require the careful avoidance of anthropomorphization.

2. The Philosophical Significance of Intelligence in Terms of Information and Complexity

Is this conceptual complexity of intelligence, or intelligences, a good reason to question the feasibility of, or justification for, any attempt to seek a unifying perspective on such a complex variety of related yet diverse forms of what in multiple contexts is called by the same common-sense name of “intelligence”? This paper has as its objective the justification of a negative answer to this question. In this case, as in many other cases known from the intellectual history of humanity, the phenomenal complexity of the manifestations of intelligence is unquestionable, but the complexity of their perception and comprehension can be overcome with the use of the appropriate intellectual tools.

The tools proposed here are appropriately general concepts of information and its complexity, together with already known methods of reducing or controlling the complexity of information, whose long history goes back at least to the Law of Requisite Variety proposed by W. Ross Ashby [6]. In this paper, the methods of reducing complexity are traced much further back in their association with intelligent or efficient inquiry. The use of the concept of information brings its own challenges, as the term “information” is still subject to the illusory uniformity of meaning in its common-sense use considered at the beginning of this paper. However, in the philosophy of information, this issue has been adequately addressed, and this intellectual experience can now be applied to intelligence.
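Ashby's Law of Requisite Variety admits a compact statement; the following sketch uses the standard entropy formulation found in the cybernetics literature, not the notation of [6]:

```latex
% Law of Requisite Variety (standard entropy form): the variety (entropy)
% of outcomes O cannot be reduced below the variety of disturbances D
% minus the variety of the regulator R,
\begin{equation}
  H(O) \;\geq\; H(D) - H(R),
\end{equation}
% so only an increase in the regulator's variety H(R) can further reduce
% the variety of outcomes: "only variety can destroy variety".
```

In this reading, controlling the complexity of information is precisely a matter of matching the regulator's variety to that of the disturbances it must absorb.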

Of course, the proposed tools can be used only if we assume that intelligence can be relativized to the concept of information. However, all existing studies of intelligence contain this assumption, even if sometimes in a hidden form. Moreover, the philosophical significance of the concept of information, in particular in its association with complexity, brings to the inquiry an extensive philosophical toolbox that allows for the integration of methodologies developed for the research of particular domains of reality. At the same time, the study of intelligence acquires an interdisciplinary status.

After reviewing a wide range of fundamental concepts associated with intelligence in its diverse contexts, this paper presents an initial formulation of the general characterization of intelligence as the ability to minimize the resources necessary to effectively perform a maximal range of actions. However, it is shown that the use of the terms “resource” and “action” leads to overgeneralization in the absence of their qualification or the identification of their meaning. For instance, the motion of every object can be described by the mechanical principle of least action. On the other hand, the identification of resources may lead either to excessive restrictions, resulting in undergeneralization, or to a vicious circle of reasoning. For this reason, the term “resource” is replaced with “information” and “action” with “overcoming complexity”. Intelligence is then defined as the capacity to overcome complexity. Incidentally, with this understanding of intelligence comes the elimination of the complexity of its manifestations in diverse contexts by lifting the level of abstraction through the overarching concept of information. Therefore, our inquiry can itself be considered an intelligent inquiry into intelligence.
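The overgeneralization worry can be made concrete: in classical mechanics, the trajectory of any object already “minimizes a resource” in the sense of the stationary-action principle. The following is a sketch in standard textbook notation, not taken from the source:

```latex
% Stationary-action (least-action) principle: the physical trajectory q(t)
% makes the action functional S stationary,
\begin{equation}
  \delta S = \delta \int_{t_1}^{t_2} L\bigl(q, \dot{q}, t\bigr)\, dt = 0,
\end{equation}
% where L = T - V is the Lagrangian. Since every mechanical motion satisfies
% this condition, "minimizing a resource" alone cannot single out intelligence.
```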

References

  1. Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux, New York, 2011.
  2. Cattell, R. B. The measurement of adult intelligence. Psychological Bulletin, 1943, 40(3), 153–193. https://doi.org/10.1037/h0059973
  3. Sternberg, R. J. Beyond IQ: A Triarchic Theory of Intelligence. Cambridge University Press, Cambridge, 1985.
  4. Gardner, H. Frames of Mind: The Theory of Multiple Intelligences. Basic Books, New York, N.Y. 1983, ISBN 978-0133306149.
  5. Jones, N. How AI can achieve human-level intelligence: researchers call for change in tack. Nature News, 4 March 2025 (accessed 7 March 2025).
  6. Ashby, W. R. An Introduction to Cybernetics. Chapman & Hall, London, 1956.
Infoautopoiesis and Intelligence

A fundamental issue is whether artifacts can ever gain the status of living beings. In particular, meaning-making by machines is an area of study whose importance cannot be ignored, putting front-and-center the matter of intelligence, whose etymological origin relates to “the ability to understand” [1]. Since the release of ChatGPT by OpenAI on 30 November 2022 [2], countless other applications have competed in searching through massive amounts of written text, reading millions of articles and books online, to produce work that has perfect grammar, correct punctuation, and no spelling mistakes. Uses include writing songs, stories, press releases, guitar tabs, interviews, essays, and technical manuals. Hallucinations are also readily produced. ChatGPT may be regarded as an autonomous agent, similar to a living organism capable of biological agency, which acquires, uses, processes, communicates, and acts on information [3]. This brings into focus the use of information as the glue that makes possible the examination of intelligence in living beings and their artifacts. The purpose of this presentation is to critically examine information in this role.

The long history of information uncovers an elusive concept that needs clarification [4, 5] and involves a dichotomy that needs resolution. For some, information is an absolute quantity of the Universe alongside matter and/or energy, whose existence is predicated upon a postulate which some consider sufficient to bring it into existence [6-11]. For others, it is a relative quantity/quality, ‘a difference which makes a difference’ [12], where “The essence of this definition is that information is something which is generated by a subject. Information is always information for "someone"; it is not something that is just hanging around "out there" in the world” [13]. This implies that there is no information outside living beings interacting with their environments. Clearly, the more reliable basis for a detailed assessment of information is the one that depends not on the enunciation of a postulate but on firsthand observation.

The result of performing this assessment is infoautopoiesis, the self-referential, recursive, and interactive process of self-production of information [14]. Not only is it possible to create a model of a general organism-in-its-environment, but it is also possible to define the roles of semantic (endogenous) and Shannon/syntactic (exogenous) information [15]. Semantic information creation is motivated by the individuated satisfaction of physiological and relational needs in order to make the external environment meaningful. Semantic information is inaccessible except through external Shannon/syntactic information expressions using language, gestures, pictographs, musical instruments, sculptures, writing, coding, etc. Shannon/syntactic information is a metaphor for the creation of all the artificial artifacts in the arts and sciences which surround us [16-18]. In summary, infoautopoiesis enables an explanation of how meaning-making comes about and serves as a general framework to answer the key question: what is the connection between human and machine intelligence?
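The contrast between the two kinds of information can be anchored in the standard quantitative measure of the Shannon/syntactic side; the following is a sketch in standard textbook notation, not drawn from the cited works:

```latex
% Shannon's entropy: the average syntactic information content of a source X
% emitting symbols x with probabilities p(x),
\begin{equation}
  H(X) = -\sum_{x} p(x) \log_2 p(x) \quad \text{[bits]}.
\end{equation}
% The measure quantifies only the exogenous, syntactic side of information;
% the meaning (semantic side) of the symbols plays no role in it.
```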

References

  1. Da Silveira, T.B.N. and H.S. Lopes, Intelligence across humans and machines: a joint perspective. Frontiers in Psychology, 2023. 14.
  2. Roose, K. The Brilliance and Weirdness of ChatGPT. New York Times, 5 December 2022. Available at: https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html.
  3. Dodig-Crnkovic, G. and M. Burgin, A Systematic Approach to Autonomous Agents. Philosophies, 2024. 9(2): p. 44.
  4. Capurro, R. and B. Hjørland, The concept of information. Annual Review of Information Science and Technology, 2003. 37(1): p. 343-411.
  5. Capurro, R., Past, present, and future of the concept of information. TripleC, 2009. 7(2): p. 125-141.
  6. Wheeler, J.A. Sakharov revisited: “It from Bit”. In Proceedings of the First International A D Sakharov Memorial Conference on Physics, May 27-31, 1991, Moscow, USSR. Nova Science Publishers, Commack, NY.
  7. Stonier, T., Information and Meaning - An Evolutionary Perspective. 1997, Berlin Heidelberg New York: Springer-Verlag.
  8. Yockey, H.P., Information theory, evolution, and the origin of life. 2005, Cambridge, UK: Cambridge University Press.
  9. Lloyd, S., Programming the Universe. 2006, New York, NY: Alfred A. Knopf.
  10. Floridi, L., Information: A Very Short Introduction. 2010: Oxford University Press.
  11. Vedral, V., Decoding Reality - The Universe as Quantum Information. 2010, Oxford, UK: Oxford University Press.
  12. Bateson, G., Steps to an ecology of mind; collected essays in anthropology, psychiatry, evolution, and epistemology. Chandler publications for health sciences. 1978, New York: Ballantine Books. xxviii, 545.
  13. Hoffmeyer, J., Signs of meaning in the universe. 1996, Bloomington, IN: Indiana University Press.
  14. Cárdenas-García, J.F., The Process of Info-Autopoiesis – the Source of all Information. Biosemiotics, 2020. 13(2): p. 199-221.
  15. Burgin, M. and J.F. Cárdenas-García, A Dialogue Concerning the Essence and Role of Information in the World System. Information, 2020. 11(9): p. 406.
  16. Cárdenas-García, J.F., Info-Autopoiesis and the Limits of Artificial General Intelligence. Computers, 2023. 12(5): p. 102.
  17. Cárdenas-García, J.F., Information is Primary and Central to Meaning-Making. Sign Systems Studies - Special Issue on Contemporary Applications of Umwelt Theory, 2024. 52(3/4): p. 371-393.
  18. Cárdenas-García, J.F., Syntactic Touch: A Probe. New Explorations: Studies in Culture and Communication, 2024. 4(2).