
List of accepted submissions

 
 
  • Open access
Modeling Denialism as an Epistemic Game: Trust, Misinformation, and Strategic Ignorance

Denialism—the consistent dismissal of solid evidence or widespread agreement—represents a substantial obstacle to public discussions, scientific advancement, and shared decision-making. Although frequently viewed as an issue of misinformation or irrational behavior, this paper contends that denialism can be more effectively examined using the framework of epistemic game theory. Within this framework, the decisions of agents rely not solely on their own preferences but importantly on their perceptions of others’ knowledge, rationality, and probable behaviors. Denialists, specialists, and broader society engage in a complicated strategic landscape influenced by belief systems, information imbalances, and the presence or absence of a shared understanding. Denialism arises and persists not only due to ignorance but also as a logical reaction to perceived doubts regarding the reliability of information sources, the dependability of experts, and the convictions of others. Modeling these interactions as epistemic games allows for a formal analysis of how disruptions in common knowledge and changes in belief systems sustain denialist tactics, despite substantial evidence. This method highlights the philosophical aspects of denialism—posing inquiries regarding rationality, trust, and the essence of public reasoning—and indicates that successful interventions should address not only gaps in information but also the profound epistemic factors that influence collective belief construction and opposition to evidence.

  • Open access
Stochastic punishment by authorized third parties in a public goods game: the role of reputation-based migration

This paper examines how delegated third-party punishment can sustain cooperation in large-scale, mobile societies facing public goods dilemmas. Specifically, we explore how the effectiveness of delegated punishment changes in more realistic scenarios in which sanctions are neither certain nor immediate (e.g., free-riders are not all equally likely to be punished). Additionally, we examine whether informal sanctions, such as reputation-based "voting with one's feet", can complement or substitute for formal punishment mechanisms in promoting and maintaining cooperation, especially in the presence of mutant defectors.

We propose an agent-based model of authorized third-party punishment that is non-deterministic and non-immediate in unstructured populations (e.g., during the Great Migration). Three types of players—unconditional cooperators, unconditional defectors, and conditional cooperators—are matched in a one-shot public goods game, after which their contributions are mapped onto reputation scores that accumulate continuously up to a fixed upper bound. The reputation scores guide players’ movements and determine the intensity of the punishment they receive. Agents update their strategies via payoff-based imitation to capture social learning dynamics. We assess cooperation across scenarios defined by three factors—punishment type (none/deterministic/stochastic), migration rule (random vs. voting with one’s feet), and mutation status (presence vs. absence).
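As an illustration, the core payoff and punishment mechanics described above can be sketched in a few lines; the multiplier, reputation cap, fine, and detection probability below are placeholder values, not the paper's calibration:

```python
import random

def pgg_round(contribs, endowment=1.0, r=1.6):
    """One-shot public goods game: each player contributes part of an
    endowment; the pot is multiplied by r and shared equally."""
    share = sum(contribs) * r / len(contribs)
    return [endowment - c + share for c in contribs]

def update_reputation(rep, contrib, cap=5.0):
    """Reputation accumulates with contributions, capped at a fixed upper bound."""
    return min(cap, rep + contrib)

def stochastic_punishment(payoffs, contribs, fine=1.2, p_detect=0.5, rng=random):
    """An authorized third party fines free-riders, but enforcement is
    uncertain: each zero-contributor is fined only with probability p_detect."""
    out = list(payoffs)
    for i, c in enumerate(contribs):
        if c == 0 and rng.random() < p_detect:
            out[i] -= fine
    return out

# Two cooperators and two free-riders: free-riders earn more unless fined.
payoffs = pgg_round([1.0, 1.0, 0.0, 0.0])
```

With certain detection (`p_detect=1.0`) the fine removes the free-rider advantage in this toy setup; lowering `p_detect` weakens deterrence, which is the stochastic regime the model explores.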

The experimental results demonstrate that reputation-based “voting with one’s feet” migration significantly enhances cooperators’ spatial clustering and survival probability. This clustering reduces the enforcement burden on third-party punishers, making cooperation easier to sustain than under random migration. In the absence of any punishment, however, cooperation collapses rapidly regardless of migration mode. Without mutation, deterministic punishment maintains higher average contributions under both random and reputation-based migration. Under the reputation-based “voting with one’s feet” scenario, however, once mutation is introduced, deterministic punishment tolerates internal variation within the core cluster less well than stochastic punishment does, causing cooperation to collapse to a low level.

  • Open access
Games with Costly Endogenous Separation

Introduction
Games with costly endogenous separation (GCESs) are repeated games where players have the option to leave their current partnership and form a new one with a random partner, incurring a cost in the process. Thus, in these games, partnerships may be broken not only due to causes that are not related to the players’ choices (exogenous separation), but also due to players’ decisions (endogenous separation). We extend the framework of symmetric two-player games with endogenous separation by adding costs to the separation process and studying its influence on the existence of stable equilibria.


Methods
We study the existence of Nash and Neutrally Stable (NS) equilibria in the general case of symmetric two-player games with costly endogenous separation, with applications to the GCES version of the prisoner’s dilemma and the hawk–dove game.
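The intuition for why a large separation cost can protect cooperation admits a one-line sketch: a "hit-and-run" deviant who defects and then rematches must pay the cost every round. The payoff values and the stability condition below are an illustrative simplification, not the paper's formal analysis:

```python
# Toy prisoner's dilemma payoffs with T > R > P > S.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def hit_and_run_profitable(sep_cost):
    """A 'hit-and-run' deviant defects against a cooperator (earning T each
    round) and then pays sep_cost to abandon the partnership and rematch
    with a fresh cooperator, so its per-round value is T - sep_cost.
    A resident full cooperator earns R per round.  The deviation pays off
    only while T - sep_cost exceeds R, i.e. while sep_cost < T - R."""
    return T - sep_cost > R
```

Under this toy condition, any separation cost of at least T - R removes the hit-and-run incentive, echoing in highly simplified form the finding that a sufficiently large cost supports cooperative equilibria.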

Results
Fully cooperative strategies can support an NS equilibrium in both the prisoner’s dilemma and the hawk–dove game when the cost of endogenous separation is large enough. However, achieving a fully cooperative equilibrium requires a considerably high cost.

Conclusions
GCESs may provide a good approach to modelling many real scenarios with freedom of association and search costs and can help explain cooperative or partially cooperative equilibria in social dilemmas.

  • Open access
From Q-Learning to Quantum Models: The Evolution of Game Theory through Artificial Intelligence

The convergence of artificial intelligence (AI) and game theory has opened up transformative possibilities for how systems adapt and interact in dynamic strategic environments. This paper presents a comprehensive summary of recent developments in which deep learning, reinforcement learning, and optimization algorithms have redefined traditional game-theoretic models. AI now enables adaptive decision-making in both competitive and cooperative scenarios, such as cybersecurity and autonomous driving, by combining knowledge from neural-network-based strategy formulation and multi-agent reinforcement learning. Special focus is placed on the implementation of deep Q-networks and Q-learning, which allow agents to develop effective strategies through iterative self-learning in uncertain and complex situations. This study also highlights the ethical concerns, transparency issues, and biases present in strategies created by AI. Algorithmic evaluations using tools such as TensorFlow and OpenAI Gym demonstrate the practical feasibility of these methods in various game scenarios. Despite challenges in scalability, interpretability, and incomplete information, the results confirm that AI methods not only enhance strategic modeling but also push the boundaries of what autonomous systems can achieve in interactive decision-making. This paper aims to contribute a combined understanding of how AI is revolutionizing game theory and outlines future research directions involving hybrid learning models, self-organizing systems, and quantum strategies for more intelligent, ethical, and effective game-based decision systems.
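The Q-learning loop mentioned above is easy to sketch in tabular form; the game payoffs, learning rate, and exploration level here are arbitrary choices for illustration, not drawn from the surveyed work:

```python
import random

PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}  # row player's payoffs

def q_learning(opponent_action=0, episodes=2000, alpha=0.1, epsilon=0.1, seed=1):
    """Tabular Q-learning against a fixed opponent in a one-state game:
    epsilon-greedy action choice plus an incremental value update."""
    rng = random.Random(seed)
    q = [0.0, 0.0]  # one state, two actions
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.randrange(2)                  # explore
        else:
            a = max((0, 1), key=lambda i: q[i])   # exploit
        reward = PAYOFF[(a, opponent_action)]
        q[a] += alpha * (reward - q[a])           # one-shot game: no next-state term
    return q

q = q_learning()
```

After enough episodes, the learned values approach the true expected payoffs of each action against the fixed opponent, which is the "iterative self-learning" the abstract refers to in its simplest form.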

  • Open access
Relaxing Participation Constraints in Cooperative Game Approach to Village–Company Agreements

This study builds upon a cooperative game framework based on the Cartesian product of two sets, as introduced by M. Slime et al., by proposing a more flexible participation model. In one of their previous studies published in 2024, the authors assumed that all players must participate simultaneously in both games. Here, we relax this constraint by allowing each player to independently decide whether to participate in one, both, or neither of the games. This generalization better reflects real-world situations in multi-agent systems, such as service provision agreements involving agents with differing preferences or capabilities.

We redefine coalition structures and the associated payoff functions within this extended framework and investigate how classical solution concepts—specifically the core and the Shapley value—can be adapted to this setting. While the relaxation introduces additional complexity to the analysis, it enables a richer and more inclusive model of partial cooperation. An illustrative example is provided, and conditions are identified under which desirable properties are achieved.
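For concreteness, the Shapley value mentioned above can be computed from its permutation definition; the two-player worths below are invented for illustration and are not taken from the paper:

```python
import math
from itertools import permutations

def shapley(players, v):
    """Shapley value via its permutation definition: average each player's
    marginal contribution over all orders in which the grand coalition forms.
    v maps each frozenset coalition (including the empty set) to its worth."""
    phi = {p: 0.0 for p in players}
    n_orders = math.factorial(len(players))
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += (v[coalition | {p}] - v[coalition]) / n_orders
            coalition = coalition | {p}
    return phi

# Hypothetical village-company worths (not from the paper):
worth = {
    frozenset(): 0.0,
    frozenset({"village"}): 1.0,
    frozenset({"company"}): 2.0,
    frozenset({"village", "company"}): 6.0,
}
phi = shapley(["village", "company"], worth)
```

In this toy example the cooperation surplus (6 versus 1 + 2 standing alone) is split so that each party receives its stand-alone worth plus half the surplus, the usual two-player Shapley outcome.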

  • Open access
Mistakes versus Preferences in Economic Games

In this paper, we study the extent to which observed deviations from normative principles of behavior in economic games are due to mistakes that people would like to correct. To this end, we conduct two separate experiments: one in the context of prisoner’s dilemma games and one in the context of public goods games. In both experiments, our participants evaluate two rules to make decisions that could be followed in such games. Rule 1 implies always implementing dominant strategies if there are any, which is arguably the strongest principle of rational behavior in game theory. Rule 2 involves implementing certain strategies that are superior in terms of payoffs for both players, allowing for more efficient outcomes. This rule can be intuitively appealing, but it does not correspond to any fundamental normative principles of game theory and can lead to violations of dominance. In this setting, participants do three things: 1) they decide whether they want to implement each rule to make decisions for them, 2) they play games with various payoffs, and 3) they are presented with all the inconsistencies between their rule preferences and their decisions in the games and are asked to reconsider them. Changes in game decisions that are made to align with Rule 1 (the normative rule) are considered indicative of mistakes. Decisions that still deviate from dominance after reconsideration are considered to be deviations due to preferences. In prisoner’s dilemma games, 31% of the initial choices violate dominance, and mistakes account for 21% of these deviations. In public goods games, 77% of the initial choices violate dominance, and mistakes account for 4% of them. While a small proportion of deviations from normative behavior can be explained by mistakes, the majority cannot, suggesting that systematic behavioral principles primarily drive deviations from the normative standard.
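Rule 1 presupposes a test for dominance; a minimal check, with illustrative prisoner's-dilemma payoffs rather than the experiment's actual parameters, might look like this:

```python
def strictly_dominant(payoffs, action):
    """True if `action` is strictly dominant for the row player: it beats
    every other row action against every column the opponent could play.
    payoffs[a][b] is the row player's payoff when the pair (a, b) is played."""
    rows = range(len(payoffs))
    cols = range(len(payoffs[0]))
    return all(
        payoffs[action][b] > payoffs[a][b]
        for a in rows if a != action
        for b in cols
    )

# Illustrative prisoner's dilemma payoffs (0 = cooperate, 1 = defect):
PD = [[3, 0],
      [5, 1]]
```

In a standard prisoner's dilemma, defection passes this check while cooperation fails it, which is why cooperative choices there count as violations of dominance in the study's terms.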

  • Open access
Strategic Evolution in a Dual-Game Framework: An Agent-Based Model of Inequality and Cooperation

Introduction

This study uses agent-based modeling to explore the effects of learning mechanisms and population scaling on inequality and cooperation within a framework combining the Ultimatum Game (UG) with the Public Goods Game (PGG). Initially, agents participate in UG sessions, where their accumulated payoffs determine their initial endowments for subsequent PGG sessions. Agent behavior—reflecting traits like fairness, selfishness, and cooperativeness—is based on experimental data from a lab-in-the-field study integrating both games. This research aims to understand how different learning dynamics and structural conditions affect strategy evolution and outcome distribution.

Methods

We conducted several agent-based simulations to replicate a real-world experimental design. In the first four simulations, 182 agents played 30 rounds of the UG followed by 30 rounds of the PGG, starting with fixed strategies and a marginal per capita return (MPCR) of 0.3. We later introduced a higher MPCR, medium-level learning, and imitative learning. A final simulation expanded the imitative learning model to 500 agents over 100 rounds. The results showed that increasing the MPCR and implementing learning mechanisms enhanced cooperation and reduced inequality in the PGG. Imitative learning notably yielded the most significant improvements.
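The dual-game structure (UG earnings seeding PGG endowments, with an MPCR of 0.3) can be sketched as follows; the specific offers and contributions are hypothetical:

```python
def ug_round(offer, endowment=10.0, accept=True):
    """One Ultimatum Game round: the proposer offers `offer` out of the
    endowment; if accepted, the responder gets the offer and the proposer
    keeps the rest; if rejected, both earn nothing."""
    if not accept:
        return 0.0, 0.0
    return endowment - offer, offer

def pgg_payoffs(endowments, contribs, mpcr=0.3):
    """Public Goods Game payoff: keep what was not contributed, plus the
    marginal per capita return (MPCR) times the common pot."""
    pot = sum(contribs)
    return [e - c + mpcr * pot for e, c in zip(endowments, contribs)]

# UG earnings become PGG endowments, mirroring the dual-game design.
proposer, responder = ug_round(offer=4.0)
payoffs = pgg_payoffs([proposer, responder], [3.0, 2.0])
```

Because an MPCR below 1 makes contributing individually costly, raising it (as the later simulations do) directly weakens the free-riding incentive in this payoff structure.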

Results

The larger final simulation, with extended interactions, resulted in the highest PGG payoffs and the lowest inequality levels. While payoffs from the UG remained stable, inequality decreased as learning fostered fairer strategies. Inferential tests highlighted significant differences in decision-making across simulations, underscoring the role of behavioral adaptation.

Conclusions

By integrating the UG with the PGG, this study clarifies how initial bargaining influences subsequent cooperation. Overall, learning mechanisms, especially imitation, and prolonged interactions promote successful strategies, encouraging fairness and enhancing cooperative outcomes.

  • Open access
On the 3-interval Ulam-Renyi game with 3 lies

We consider a variant of the classical 20 questions game with lies (also known as the Ulam-R\'enyi game).
There are two players: Paul (the Questioner) and Carole (the Oracle).
Carole selects a number $x\in U=\{0,1,\dots,2^m-1\}$ without revealing it to Paul.
Paul asks membership questions of the form "Is $x\in S$?", where $S\subseteq U$.
Carole's aim is to maximize the number of questions Paul must ask.
She is allowed to lie up to $e$ times.
Paul wins if, after $q$ questions, exactly one value of $x$ remains possible; otherwise, Carole wins.
It is known [1] that Carole can win whenever $q<Q_{min}(2^m,e)$, where
$Q_{min}(2^m,e)=\min\{q~|~ 2^{q-m}\ge \sum_{i=0}^e {q\choose i}\}$.
We study the $k$-interval Ulam-R\'enyi game, where each query set $S$ is the union of at most $k$ intervals.
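The bound quoted above translates directly into code; this is a transcription of the stated formula for $Q_{min}(2^m,e)$, not part of the paper:

```python
from math import comb

def q_min(m, e):
    """Smallest q with 2**(q - m) >= sum_{i=0}^{e} C(q, i): the bound
    Q_min(2^m, e) quoted from Berlekamp [1]."""
    q = m
    while 2 ** (q - m) < sum(comb(q, i) for i in range(e + 1)):
        q += 1
    return q
```

For example, with no lies the bound reduces to the familiar $m$ binary-search questions, and each allowed lie pushes the minimum up by several questions.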

Recently it was shown [2] that Paul can win the game with $e=3$ lies and $k=4$ intervals,
using $Q_{min}(2^m,e)$ questions for sufficiently large $m$.
In this paper, we show that three intervals suffice for Paul to win when $m=O(1)$.
It remains an open problem whether Paul can win using three-interval sets for all values of $m$.


References.

[1] E.~R. Berlekamp. Block coding for the binary symmetric channel with noiseless, delayless feedback. In {\em Error-Correcting Codes}, pp. 61--68. Wiley, 1968.

[2] F.~Cicalese and M.~Rossi. On the multi-interval {U}lam-{R}{\'{e}}nyi game: For 3 lies 4 intervals suffice. {\em Theor. Comput. Sci.}, 809:339--356, 2020.

  • Open access
Sex Differences in Cooperation and Costly Punishment under High Emotional Stress: An Experimental Study

Introduction: Prosocial behavior, a cornerstone of human nature, has been extensively studied for over a century due to its key role in our species’ evolution. Emotions are central to fostering cooperation and regulating prosocial actions. This study investigates the emotional component of economic behavior and sex differences in decision-making under high emotional stress.

Methods: We conducted an experiment in Moscow (N=172) using four economic games with real monetary payoffs: a Trust Game (TG), a Prosocial Punishment Game (PPG), a Prisoner’s Dilemma (PD), and an Ultimatum Game (UG). One-shot decisions were made sequentially in each game. The participants interacted face-to-face with a partner (“actor”) of the same sex across all four games. The actors made decisions according to an identical scheme, independent of the participants’ decisions. Specifically, in the initial TG, all the participants who trusted their partner (“actor”) lost all their funds due to their partner’s selfish decision.

Results: Most participants hesitated to impose a costly punishment for selfish behavior. However, 34.4% were willing to apply a costly punishment and confront an unfair partner directly, whereas 16.6% employed an extremely punishing strategy, spending all their available funds to completely eliminate the selfish partner’s capital. Women were more likely to employ prosocial punishment than men. In the PD, 66.3% of women defected, given earlier unfair behavior by their partner, while 53.1% of men still cooperated. Women who had used extreme prosocial punishment in the PPG were more prone to offering equal splits in the UG, a behavioral pattern not observed in men. All the effects discussed were statistically significant.

Conclusions: Our study indicates that modern Moscow youth exhibit sex-specific responses to the emotional aspects of economic interactions, in particular when confronting selfish or unfair behavior. Women exhibited heightened sensitivity to the negative emotions elicited by unfair actions. They also demonstrated a potential connection between the emotions triggered by selfish behavior and their sense of fairness.

  • Open access
Algorithmic collusion under competitive design

In this article, I study a model in which two companies use Q-learning algorithms to repeatedly play a game from a class including first/second price auctions, Bertrand competition, and the prisoner’s dilemma. Previous papers have highlighted the tendency of these algorithms to behave collusively without any instruction to do so (algorithmic collusion), depending on their parameterization. Unlike previous papers in the literature, I assume that the companies are free to select the parameters for their algorithms strategically. Thus, in a first stage, the companies simultaneously select parameters for their algorithms; in a second stage, the algorithms repeatedly play the game (for example, an automated auction for advertisement spots) and the designers collect the limiting payoffs. I show that, under the (mere) assumption that the companies’ machines have limited capacities, any Nash equilibrium of this two-stage game features some algorithmic collusion (Theorem 1). In other terms, letting the margin of competition move to the design of the algorithms is not sufficient to avoid algorithmic collusion. In the second part, I use extensive numerical simulations to study a restriction of this model to a repeated prisoner’s dilemma played by ε-greedy algorithms. The results reveal the strategic role of exploration levels and how they hamper collusion in equilibrium.
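The second-part setup, ε-greedy Q-learners repeatedly playing a prisoner's dilemma, can be sketched as follows; payoffs, the state encoding, and the learning parameters are illustrative, and whether collusive play emerges depends on the parameterization, as the abstract notes:

```python
import random

PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

def run(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Two epsilon-greedy Q-learners repeatedly play a prisoner's dilemma
    (0 = cooperate, 1 = defect), each conditioning on the previous joint
    action as its state."""
    rng = random.Random(seed)
    q = [{}, {}]  # q[i][(state, action)] -> estimated value

    def act(i, s):
        if rng.random() < epsilon:
            return rng.randrange(2)
        return max((0, 1), key=lambda b: q[i].get((s, b), 0.0))

    state = (0, 0)
    for _ in range(episodes):
        joint = (act(0, state), act(1, state))
        rewards = PAYOFF[joint]
        for i in (0, 1):
            best_next = max(q[i].get((joint, b), 0.0) for b in (0, 1))
            old = q[i].get((state, joint[i]), 0.0)
            q[i][(state, joint[i])] = old + alpha * (rewards[i] + gamma * best_next - old)
        state = joint
    return q

q_tables = run()
```

In the strategic-design story, each company would choose `alpha`, `gamma`, and `epsilon` in the first stage and then let a loop like this one determine the limiting payoffs.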
