Research suggests that large-scale human cooperation is driven by shared narratives that encode common beliefs and values. This study explored whether such narratives can similarly nudge large language model (LLM) agents toward collaboration: agents were primed with different stories and then played a (networked) finitely repeated public goods game.
Our method employed LLM agents playing repeated networked public goods games characterized by collective optimality, individual incentives, and iterative adaptation. We implemented two complementary variants: single-pool experiments used one shared pool to test scaling effects across group sizes and robustness to defection, while multi-pool experiments introduced strategic complexity through overlapping pools. We manipulated behavioral homogeneity through narrative priming, providing each agent with a story-based behavioral context via its system prompt. The story corpus comprised eight cooperation-themed narratives emphasizing teamwork and collective benefit, plus four control conditions. Depending on the experimental condition, agents either all received the same story (homogeneous condition) or were randomly assigned distinct stories (heterogeneous condition).
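To make the game's incentive structure and the priming mechanism concrete, the following is a minimal sketch of one single-pool round; the endowment, multiplier, prompt wording, and story names are illustrative assumptions rather than the exact setup used in the experiments.

```python
import random

# Illustrative single-pool public goods round. The endowment, multiplier,
# prompt wording, and story names are assumptions for this sketch, not the
# exact parameters used in the paper's experiments.
ENDOWMENT = 10    # tokens each agent holds at the start of a round
MULTIPLIER = 1.6  # 1 < r < n: full contribution is collectively optimal,
                  # but withholding tokens is individually tempting

def round_payoffs(contributions: list[float]) -> list[float]:
    """Each agent keeps what it did not contribute plus an equal share
    of the multiplied common pool."""
    n = len(contributions)
    share = MULTIPLIER * sum(contributions) / n
    return [ENDOWMENT - c + share for c in contributions]

def build_system_prompt(story: str) -> str:
    """Narrative priming: prepend a story to the game instructions in the
    agent's system prompt (the wording here is hypothetical)."""
    return (
        f"{story}\n\n"
        f"You start each round with {ENDOWMENT} tokens. Choose how many to "
        f"contribute to the shared pool; the pool is multiplied by "
        f"{MULTIPLIER} and split equally among all players."
    )

# Condition assignment for a 4-agent group (placeholder story identifiers).
stories = ["story_teamwork", "story_harvest", "story_river", "story_control"]
homogeneous = [stories[0]] * 4               # everyone primed with the same story
heterogeneous = random.sample(stories, k=4)  # each agent gets a distinct story

# The free rider (contribution 0) earns the most this round: [10.0, 10.0, 20.0, 15.0]
print(round_payoffs([10, 10, 0, 5]))
```

With 1 < MULTIPLIER < n, the group earns most when everyone contributes fully, yet each individual earns more by withholding, which is exactly the tension the narrative priming is meant to modulate.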
The results demonstrate that story-based priming affects collaboration. A common story improved collaboration and benefited all participants, while priming with different stories reversed this effect, favoring self-interested agents. In homogeneous groups, cooperation-themed stories yielded near-perfect collaboration scores, significantly outperforming the baseline controls. In heterogeneous groups, self-interested agents achieved the highest cumulative payoffs while cooperation-primed agents obtained the lowest returns. These patterns persisted across network sizes and structures in both single-pool and multi-pool architectures.
The findings reveal that narrative coherence among agents influences the viability of cooperation, with implications for multi-agent coordination and AI alignment in networked environments.
The code is available at https://github.com/storyagents25/story-agents.
