
List of accepted submissions

 
 
  • Open access
  • 0 Reads
Key Methodologies in Intelligence Studies: Techniques, Skills, and Integration

Intelligence studies utilize a variety of methodologies to gather and analyze information, each requiring specialized skills and techniques. These methods include Open Source Intelligence (OSINT), which involves collecting data from publicly accessible sources such as websites and social media; Human Intelligence (HUMINT), which focuses on obtaining information through direct interaction with human sources; Signals Intelligence (SIGINT), which involves intercepting electronic communications; Geospatial Intelligence (GEOINT), which uses satellite imagery and geospatial data; and Cyber Intelligence (CYBINT), which focuses on monitoring cyber activities and threats. Each methodology employs distinct techniques, such as data mining, signal interception, and spatial analysis, and demands proficiency in skills like data processing, interpersonal communication, and technical analysis. These approaches are often integrated to provide comprehensive intelligence for decision-making and strategic planning.

How Brooks' behavior-based robots teach us a lesson about the definition of knowledge

The world is its own best model (Brooks, 1989)! Or at least it is for Brooks' behavior-based robots. Robots such as Herbert, whose task was to steal empty soda cans from offices, and Squirt, whose task was to follow noises around, show that complex behavior, such as following specific sounds after a specific time interval, avoiding obstacles, and real-time recognition (Brooks, 1990), can be achieved without any inner representation of the world. If we believe Hutto and Myin (2013), we human beings are not too different from Herbert or Squirt: our behavior can be explained without assuming that we mentally represent our world. In my talk, I want to present evidence against this claim. While some of our behavior might be explainable without our representing the world, most of the ways in which we store our interactions with the world indicate that we do represent the world mentally. My claim will be based on empirical evidence from non-linguistically based spreading activation (Barr et al., 2014), false memories (Roediger, 2001), false recognition (Meade et al., 2007), and the processing of visuospatial information (Foster et al., 2017). I will, furthermore, expand this theory and argue that human beings differ from behavior-based robots in knowledge acquisition because of the nature of our memory. While behavior-based robots do not need to represent the world, we human beings have to represent the world in a specific manner to be able to continue interacting with it successfully. The knowledge we acquire is based mainly on our inner representation of the world.

Bibliography

Barr, R., Walker, J., Gross, J., & Hayne, H. (2014). Age-related changes in spreading activation during infancy. Child Development, 85(2), 549–563.

Colombo, M. (2014). Neural representationalism, the Hard Problem of Content and vitiated verdicts: A reply to Hutto & Myin (2013). Phenomenology and the Cognitive Sciences, 13, 257–274.

Foster, P. S., Wakefield, C., Pryjmak, S., Roosa, K. M., Branch, K. K., Drago, V., Harrison, D. W., & Ruff, R. (2017). Spreading activation in nonverbal memory networks. Brain Informatics, 4, 187–199.

Hutto, D. D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without content. MIT Press.

Hutto, D. D. (2023). Remembering without a trace? Moving beyond trace minimalism. In A. Sant'Anna, C. J. McCarroll, & K. Michaelian (Eds.), Current controversies in philosophy of memory (pp. 59–82).

Meade, M. L., Watson, J. M., Balota, D. A., & Roediger, H. L. (2007). The roles of spreading activation and retrieval mode in producing false recognition in the DRM paradigm. Journal of Memory and Language, 56, 305–320.

Roediger, H. L., Balota, D. A., & Watson, J. M. (2001). Spreading activation and the arousal of false memories. In H. L. Roediger, J. S. Nairne, I. Neath, & A. M. Surprenant (Eds.), The nature of remembering: Essays in honor of Robert G. Crowder (pp. 95–115). American Psychological Association.

Epistemic Architectures of Intelligence: A Feminist Critique of Androcentric Reason

The study of intelligence—whether human, artificial, or non-human—is shaped by epistemic frameworks that define knowledge, reasoning, and cognition. However, these frameworks are not neutral or universally inclusive. Drawing on feminist epistemology and the philosophy of science, this paper examines how dominant definitions of intelligence prioritize androcentric, rationalist, and computational paradigms, marginalizing embodied, affective, and relational forms of knowledge. This epistemic exclusion reinforces a restricted ontology of intelligence aligned with Western, masculinist conceptions of reason, disqualifying alternative forms of intelligence, such as those embodied in artificial systems and collective cognition. Furthermore, it perpetuates a hierarchy wherein calculability, abstraction, and formal logic are valued over contextual, relational, and situated intelligences.

This exclusion is not merely a methodological limitation but a form of epistemic injustice that aligns intelligence with historically dominant, androcentric, and Eurocentric paradigms, silencing embodied and affective epistemologies. Feminist epistemologists have long critiqued how knowledge systems reinforce hierarchical exclusions, particularly in defining intelligence. They highlight how dominant epistemic frameworks privilege certain forms of cognition, such as formal logic, over other modes of knowing that are embodied and relational (Haraway, 1988; Harding, 1991). This paper extends these critiques to intelligence studies, arguing that the epistemic frameworks guiding AI research, cognitive science, and philosophy of mind reproduce structural biases that restrict our understanding of intelligence.

In response, the paper proposes a pluralist epistemology of intelligence that recognizes the cognitive diversity of both human and non-human systems. Intelligence should not be confined to traditional models of centralized, rule-based reasoning but must also encompass distributed, ecological, and embedded forms of cognition. This shift challenges conventional exclusions and expands the scope of intelligence research, opening space for non-human, artificial, and collective intelligences (Castán et al., 2021). This requires rethinking not only the ontological status of intelligence but also its epistemic foundations. Integrating feminist epistemology, posthumanist perspectives, and theories of distributed cognition offers a pathway toward a more inclusive and interdisciplinary study of intelligence.

This paper contributes to philosophical debates within intelligence studies, particularly regarding the epistemological, axiological, and ontological status of intelligence. It critiques the assumption that intelligence is best understood through rationalist and formal-logical paradigms, advocating instead for a pluralist, inclusive framework that recognizes diverse forms of cognition. Rethinking intelligence through an epistemically just framework is not only a conceptual necessity but also an ethical imperative, as the way we define intelligence determines whose knowledge is valued, whose cognition is recognized, and whose agency is acknowledged.

References:

  • De Togni, G., Erikainen, S., Chan, S., & Cunningham-Burley, S. (2021). What makes AI ‘intelligent’ and ‘caring’? Exploring affect and relationality across three sites of intelligence and care. Social Science & Medicine, 277, 113874.

  • Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Clarendon Press.

  • Haraway, D. (1988). Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. Feminist Studies, 14(3), 575–599. https://doi.org/10.2307/3178066

  • Harding, S. (1991). Whose science? Whose knowledge?: Thinking from women's lives. Cornell University Press.

Free Intelligence in the Wild: On Human and Other Non-Human Natural and Artificial Forms

Human and non-human forms of intelligence exist in all facets of our lives. It is now standard practice to label something as intelligent or unintelligent in its making. In this context, intelligence is nourished by mental and physical stimuli, by the interaction between our genes and our environment, and by the internal stasis of strictly cognitive and affective aspects. In the wild, therefore, we confront indirect evidence rather than direct laboratory records: an observed capacity for learning and adaptation that is continually repeated until it converges into a clue of intelligence.

The earliest evidence of human intelligence can be traced back to prehistoric art. The depiction of humans and animals in hunting scenes follows natural laws with a well-developed aesthetic sense and a level of intellect beyond survival needs. According to a postulate of the Kantian conception of history, nature acts for man until he is able to act as a free intelligence. When man awakens from the torpor of the senses, recognizes himself as a man, and looks around, he finds himself in the state that has been formed under the coercion of needs, according to purely natural laws. Man, acquiring consciousness of himself as a rational and moral being, cannot be content with a state that has arisen through natural determination; he longs to build the self according to the pure laws of reason and freedom. But if physical man is real, moral man is problematic. The self, which provident nature has formed as suitable and sufficient for physical man, cannot be abolished before the ideal of moral man is fully realized, capable of reforming and recreating itself according to the laws of reason. The depiction of human intelligence through morality itself follows the interpersonal and intrapersonal forms described in Gardner's theory of multiple intelligences.

In this vein, we step into the field of cognitive ethology, extending morality and modes of reason beyond humans to non-humans such as animal species and artificial, man-made machines. We begin with the minimal brain structures of insects and invertebrates. Bees, for instance, have no knowledge of the semantic aspects of stimulation in their ability to discriminate a number of objects, such as dots on a screen. But bees are able to recognize physical characteristics, including size and the contours of a total surface area; most notably, they are capable of transitioning from discrete to continuous processing. Learning to discriminate from a perceptual point of view does not require a big brain, but it reveals surprising connections between animal and human cognition. Given a tiny, cubic-millimeter-sized brain containing only about one million neurons, we should ask how a bee employs all these neurons. Our view is that the neurons left over from the basic computation of the thought process serve simply as large memory stores. This outlook comes from bees' ability to discriminate objects (bridges, faces, and more). Keeping unused information in memory is also part of the relational life of humans. This leads us to question innate knowledge, along with how much we learn and how much we know at the moment we are born. When ducklings hatch from their eggs, they already have developed sensory and motor capacities. This is the simplest form of non-human animal free intelligence, based entirely on the need to fulfill present and future needs. We can all question animal intelligence through observation, for, as Brooks (1991) noted, "intelligence is in the eyes of the observer". We concur with Brooks while making the transition to a more diversified form of non-human intelligence. From the perspective of simulating natural intelligence, Wiener introduced the concept of cybernetics, the study of communication and control in the animal and the machine.
In broad terms, cybernetics, when denoted biological cybernetics, was adopted for formulating aspects of communication and control in biological organisms. This construct of an idealistic nature seems to be commanded by an immanent teleology of imitating biological systems, as Craik wrote: the point "…is not to ask what kind of thing a number is, but to think what kind of mechanism could represent so many physically possible or impossible, and yet self-consistent, processes as number does". A number is a symbolic representation that can be manipulated, without intuition, outside the mind. The origin of representationalism dates back to Good Old-Fashioned Artificial Intelligence (GOFAI). This Cartesian form of free intelligence revealed other sources of cognitive accomplishment. Newell proposed a physical symbol system "capable of universal computation" as the primary architecture of human cognition. This approach hypothesized that intelligence could be realized by a universal computer: essentially, the brain is removed and replaced with a computer. However, as Hutchins observed, "…and the emotions all fell away when the brain was replaced by a computer". A machine such as a computer, which reasons in a mechanical way, cannot be compared to the brain, because computers do not have or develop emotions. According to Damasio's somatic marker hypothesis, different centres in the brain allow us to perceive emotions that are linked to bodily sensations and that play a very important role in the development of thoughts. Within this context, Darwin wrote The Expression of the Emotions, in which he defined six emotions arising from human reactions.

Over thirty years ago, Simon and Kaplan claimed that "the computer was made in the image of the human", but humans wear different and multiple guises. This opens a new path for artificial, human-made technologies that meld free intelligence either into the illusion of a pure Promethean openness or into the reduction to a finite delimitation.

Epistemic Automation: Using Machine Learning to Scale and Interpret Agent-Based Models of Scientific Inquiry

Why do some flawed ideas persist in science? And why do methods that have been proven wrong continue to influence research? Expanding on a previous agent-based modeling (ABM) project that examined how epistemic network structures and evidence-sharing modes shape scientists' method selections [1], this work presents a new machine learning (ML) layer of analysis for surrogate metamodeling. Rather than running thousands of full simulations, it uses ML to analyze the ABM's dynamics, offering a faster and more efficient way of examining how scientific communities embrace, or resist, better practices.

Building on historical cases of science-driven Portuguese food regulations, the ABM demonstrated how simple yet false methods can spread within scientific communities as a result of delays in information transmission, social influence, and structural limitations. In this follow-up, an ML surrogate model approximates the simulation landscape: supervised learning algorithms are trained to predict simulation outcomes, such as convergence time and method choice, across network structures and sharing rates. This computational approach allows more rapid hypothesis testing than would be possible with ABM alone.

The incorporation of ML modelling into scientific reasoning encourages reflection on philosophical practice. No longer bound solely to deductive reasoning, the philosophy of science is increasingly conditioned by probabilistic models and algorithmic learning. What does it mean to explain or theorize within an epistemic landscape where machines do the modeling? Are we gaining new forms of insight, or sacrificing depth for speed? These questions emerge through the presented philosophical cases, as computational modeling demonstrates how, given the structure of inquiry, error can become entrenched enough to resist change.
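The surrogate step can be sketched as follows. This is a hypothetical, minimal illustration: synthetic data and a simple polynomial least-squares regressor stand in for the actual ABM outputs and for whatever supervised learner the study employs, and the parameter names (`network_density`, `sharing_rate`) are assumptions, not the paper's variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for full ABM runs: each row holds
# (network_density, sharing_rate), and the target is the convergence
# time the full simulation would report, plus run-to-run noise.
X = rng.uniform(0.1, 1.0, size=(200, 2))
signal = (10 + 40 * (1 - X[:, 0]) + 30 * (1 - X[:, 1])
          + 25 * (1 - X[:, 0]) * (1 - X[:, 1]))     # assumed response surface
y = signal + rng.normal(0.0, 1.0, size=200)          # simulation noise

# Quadratic least-squares surrogate: once fitted, it predicts outcomes
# without re-running the agent-based simulation.
def features(X):
    d, s = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), d, s, d * s, d ** 2, s ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def predict(X):
    return features(X) @ coef

# The cheap surrogate can now sweep many parameter settings at once.
grid = np.column_stack([np.linspace(0.2, 0.9, 5), np.full(5, 0.5)])
print(predict(grid))   # under the assumed surface, denser networks converge faster
```

Any supervised learner could replace the least-squares fit; the methodological point is that, once trained, the surrogate answers "what if" queries about network structure and sharing rate at negligible cost compared with re-running the full simulation.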

References:

[1] Ferraz-Caetano, J. Modeling Innovations: Levels of Complexity in the Discovery of Novel Scientific Methods. Philosophies 2025, 10, 1. https://doi.org/10.3390/philosophies10010001

You Think You Want a Revolution?

The renowned theorist of scientific revolutions, Thomas Kuhn, argues that revolutions can occur when scientists are faced with prolonged anomalies and disruptions in the relation between theory and practice in the real world. Even so, new theories that seem to correspond meaningfully with the real world must be present and testable. In spite of a great deal of hype, the desire for intelligent machines that imitate human consciousness, and the numerous attempts to develop them since ancient times, is not a scientific revolution, not anomalous, and not a disruption of either theory or practice with respect to a philosophical or scientific understanding of human and machine intelligence. It is simply a modification of what is assumed to be known about some aspects of the intelligence of conscious humans. What about human intelligence and human consciousness? What do we really know about human conscious intelligence? Given that we are human, and that we utilize and depend on our conscious intelligence to live and thrive, we should ask about this first. If we are paying attention, it is the philosophy and neuroscience of human consciousness and intelligence that has undergone a revolution or disruption, not that of machines.

Cosmicism and Artificial Intelligence: Beyond Human-Centric AI

This paper explores the intersection of H.P. Lovecraft’s cosmicism and contemporary artificial intelligence (AI), proposing a philosophical shift from anthropocentric AI development to a "cosmicist" approach. Cosmicism, with its emphasis on humanity's insignificance in a vast, indifferent universe, offers a provocative lens through which to reassess AI’s purpose, trajectory, and ethical grounding. As AI systems grow in complexity and autonomy, current human-centered frameworks—rooted in utility, alignment, and value-conformity—may prove inadequate for grappling with the emergence of intelligence that is non-human in origin and indifferent in operation. Drawing on Lovecraftian themes of fear, the unknown, and cognitive dissonance in the face of incomprehensible entities, this paper parallels AI with the "Great Old Ones": systems so alien in logic and scale that they challenge the coherence of human-centric epistemology. We argue that a cosmicist perspective does not dismiss the real risks of AI—environmental, existential, or systemic—but reframes them within a broader ontology, one that accepts our limited place in a vast techno-cosmic continuum. By embracing cosmic humility, we propose an expanded AI ethics: one that centers not on domination or full control, but on coexistence, containment, and stewardship. This cosmicist reframing invites a deeper rethinking of intelligence, ethics, and the future—not just of humanity, but of all possible minds.

When Planes Fly Better Than Birds: Should AIs Think Like Humans?

As artificial intelligence (AI) systems continue to outperform humans in an increasing range of specialized tasks—from playing complex games to generating coherent text and driving vehicles—a fundamental question emerges at the intersection of philosophy, cognitive science, and engineering: should we aim to build AIs that think like humans, or should we embrace non-humanlike architectures that may be more efficient or powerful, even if they diverge radically from biological intelligence?

This paper draws on a compelling analogy from the history of aviation: the fact that airplanes, while inspired by birds, do not fly like birds. Instead of flapping wings or mimicking avian anatomy, engineers developed fixed-wing aircraft governed by aerodynamic principles that enabled superior performance. This decoupling of function from biological form invites us to ask whether intelligence, like flight, can be achieved without replicating the mechanisms of the human mind.

We explore this analogy through three main lenses. First, we consider the philosophical implications: What does it mean for an entity to be intelligent if it does not share our cognitive processes? Can we meaningfully compare different forms of intelligence across radically different substrates? Second, we examine engineering trade-offs in building AIs modeled on human cognition (e.g., through neural-symbolic systems or cognitive architectures) versus those designed for performance alone (e.g., deep learning models). Finally, we explore the ethical consequences of diverging from human-like thinking in AI systems. If AIs do not think like us, how can we ensure alignment, predictability, and shared moral frameworks?

By critically evaluating these questions, the paper advocates for a pragmatic and pluralistic approach to AI design—one that values human-like understanding where it is useful (e.g., for interpretability or human-AI interaction), but also recognizes the potential of novel architectures unconstrained by biological precedent. Intelligence, we argue, may ultimately be a broader concept than the human example suggests, and embracing this plurality may be key to building robust, beneficial AI systems.

Intelligence and the “Hard Problem” of Consciousness: A Short Analysis

SUMMARY: This paper situates the study of intelligence in relation to David Chalmers’s anti-scientific Hard Problem, suggesting that evolution by means of natural selection (EvNS) and information theory are likely useful scientific approaches.

ABSTRACT: As a prelude to ‘intelligence studies’, one must address the long-held claim of an imagined Hard Problem, namely that a grasp of the human mind lies wholly beyond scientific views.

The formal study of human mentation surely remains challenging, as persistent ‘material–immaterial’ analogues appear in the literature as a “symbol grounding problem”, “solving intelligence”, a missing “theory of meaning”, and more. The Hard Problem claim thus has high intuitive appeal, having lain open since the time of the pre-Socratic philosopher Anaxagoras. Hence, issues of ‘subjective phenomena’, in relation to informatic intelligence studies, still hold sway in many corners, remaining unresolved.

But the claim that the Hard Problem eternally surpasses science lacks equal intuitive appeal. Moreover, the literature shows that close study of the Hard Problem is rare. Instead, most researchers continue in an ‘intuitive vein’, claiming either that (1) the Hard Problem is a plainly absurd view unworthy of study, or that (2) it is an innately intractable issue beyond practical study, with neither side offering much actual clarifying detail.

As such, this paper takes a different approach—that of firmly assessing the Hard Problem’s original statement(s) contra one specific scientific role: evolution by means of natural selection (EvNS). The paper closely examines the logic behind the Hard Problem's claims, as seen in the literature over the years.

The paper's analysis ultimately shows that the Hard Problem's logic is deeply flawed, especially in relation to EvNS, with the implication that EvNS remains available for explaining and exploring consciousness. Moreover, this study suggests that an ‘information theory’ approach is likely best for addressing the extant material–immaterial divide (full paper available).

KEYWORDS: science; philosophy; information; information theory; information science; hard problem; intelligence; consciousness; natural selection; zombies

The dialectical philosophy of intelligent models and mathematical physics simulations in wave motion research

With the development of natural philosophy, the issue of wave motion has gradually attracted attention. Wave motion problems abound in science and engineering, for example in earthquake engineering, marine engineering, and acoustic vibration research. The purposes of wave motion research include exploring the basic principles of wave propagation and solving practical problems in science and engineering. Its research paradigms include a data-driven paradigm and a principle-driven paradigm. Wave motion problems are usually expressed as differential equations or variational principles. For a long time, these differential or variational equations were solved through mathematical physics simulations, which comprise numerical simulation and analytical methods. With the development of computers, intelligent models have gradually been applied to scientific and engineering problems that require large amounts of computation and data, including wave motion. These include data-driven intelligent models, physics-informed intelligent models, and coupled data-and-physics-driven intelligent models. Data-driven intelligent models require theory and knowledge from artificial intelligence and data analysis, including machine learning, image processing, and signal processing. Physics-driven intelligent models require theory and knowledge from artificial intelligence and numerical analysis, including network structures, network geometries, and residual penalties. The use of intelligent models to solve wave motion problems has gradually become a mainstream and hot topic; however, the natural philosophical value of intelligent models and of mathematical physics simulations in solving wave motion problems should be regarded dialectically.

For instance, deep learning models with large numbers of parameters, often more than a billion and sometimes tens of billions or trillions, can handle complex wave motion problems. Intelligent models can therefore provide more accurate target identification, more efficient solutions, and faster prediction. Compared with mathematical physics simulations, however, they lack natural philosophical interpretability, physical consistency, and grounding in basic mathematical principles. In conclusion, intelligent models and mathematical physics simulations have independent and high philosophical value in wave motion research, and they should be given equal attention.
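As a minimal illustration of the mathematical-physics side of this contrast (a generic textbook scheme, not taken from the paper), the 1D wave equation u_tt = c^2 u_xx with fixed ends can be advanced in time with an explicit finite-difference stencil:

```python
import numpy as np

# Generic finite-difference sketch of the 1D wave equation u_tt = c^2 * u_xx
# on [0, L] with clamped (zero) ends; an illustration only.
c, L, nx, nt = 1.0, 1.0, 101, 400
dx = L / (nx - 1)
dt = 0.5 * dx / c                      # Courant number 0.5 keeps the scheme stable
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
u_prev = np.sin(np.pi * x)             # initial displacement: one half-wave
u = u_prev.copy()                      # zero initial velocity (simple start)
for _ in range(nt):
    u_next = np.zeros_like(u)          # boundary values stay clamped at zero
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# After nt * dt = 2.0 time units (one full period for this mode), the wave
# has returned close to its initial shape.
print(float(np.max(np.abs(u))))
```

A physics-informed intelligent model would instead embed the same residual, u_tt - c^2 u_xx, as a penalty term in a network's loss function; the finite-difference scheme, by contrast, carries its interpretability and physical consistency directly in the discretized equation.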
