The question of whether artificial systems can possess intentionality—the ability to be "about" something—is a central issue in the philosophy of mind and cognitive science. Traditional theories offer different explanations: representationalism sees intentionality as a matter of internal symbols, functionalism focuses on causal roles, and enactivism ties it to embodied interaction. However, each of these approaches has its limitations. Representational theories struggle with the symbol grounding problem, functionalism risks reducing intentionality to mere input–output processing, and enactivism may be too restrictive in limiting intentionality to biological organisms.
Cybernetics provides a fresh perspective by redefining intentionality as a regulatory function rather than a representational property. In this view, intentionality emerges from the way autonomous systems regulate themselves through feedback loops and homeostasis. Rather than residing in static internal representations, intentionality consists in dynamic adaptation—how a system maintains stability and achieves its goals within a changing environment. By this definition, even simple artificial systems that adjust their behavior based on internal states and external conditions can exhibit a minimal form of intentionality.
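The regulatory picture sketched above can be made concrete with a toy example. The following Python sketch (a hypothetical illustration, not drawn from the source; the class name and parameters are invented for exposition) implements a minimal homeostatic loop: a system whose internal state is perturbed by its environment and corrected toward a set point by negative feedback. On the cybernetic reading, this goal-directed self-regulation—not any internal symbol—is the seed of minimal intentionality.

```python
# Minimal illustrative sketch: a homeostatic regulator that keeps an
# internal variable near a set point despite environmental disturbance.

class Homeostat:
    """Adjusts behavior from internal state plus external conditions."""

    def __init__(self, set_point: float, gain: float = 0.5):
        self.set_point = set_point  # the system's "goal" state
        self.gain = gain            # how strongly it corrects deviations
        self.state = 0.0            # internal variable being regulated

    def step(self, disturbance: float) -> float:
        # The environment perturbs the internal state...
        self.state += disturbance
        # ...and negative feedback corrects it toward the set point.
        error = self.set_point - self.state
        self.state += self.gain * error
        return self.state


h = Homeostat(set_point=37.0)
for t in range(50):
    drift = -1.0 if t % 2 == 0 else 0.5  # fluctuating environment
    h.step(drift)

# Despite constant perturbation, the state settles near the set point.
```

The point of the sketch is not that a thermostat is intentional, but that the directedness at issue—behavior organized around maintaining a goal state against a changing world—is already present, in minimal form, in such feedback architectures.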
This presentation explores how cybernetic principles can reshape our understanding of intentionality in AI. Can artificial systems develop intentionality through adaptive regulation, even without consciousness? If so, what conditions would need to be met for AI to be considered truly intentional in a way comparable to living beings? These questions will be examined by integrating insights from the philosophy of mind, cognitive science, and systems theory.
By moving away from static, symbol-based models and toward a more dynamic, process-oriented view, this approach could lead to a more nuanced understanding of intentionality, one that bridges the gap between human cognition and artificial intelligence. Ultimately, this perspective may offer new ways of designing AI systems that are not only functionally competent but also capable of self-directed regulation, a key feature of intelligent agency.