Why do some flawed ideas persist in science? And why do methods that have been proven wrong continue to influence research? Expanding on a previous agent-based modeling (ABM) project that examined how epistemic network structures and evidence-sharing modes shape scientists' method selections [1], this work adds a machine learning (ML) layer of analysis based on surrogate metamodeling. Rather than running thousands of full simulations, it uses ML to approximate the dynamics of the ABM, offering a faster and more efficient way of examining how scientific communities embrace, or resist, better practices.
Building on historical cases of science-driven Portuguese food regulations, the ABM demonstrated how simple yet false methods can spread within scientific communities as a result of delays in information transmission, social influence, and structural limitations. In this follow-up, ML models act as surrogates for the simulation landscape: supervised learning algorithms are trained to predict simulation outcomes, such as convergence time and method choice, across network topologies and evidence-sharing rates. This computational approach allows for more rapid hypothesis testing than would be possible with the ABM alone.
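The surrogate idea described above can be illustrated with a minimal sketch. The function standing in for an ABM run, the parameter names (network density, sharing rate), and the nearest-neighbour surrogate are all hypothetical simplifications, not the authors' actual pipeline: the point is only that a cheap model fitted to a modest sample of simulation runs can answer new parameter queries without re-running the simulation.

```python
import math
import random

def abm_convergence_time(density, sharing_rate):
    """Hypothetical stand-in for one full, expensive ABM run.

    Maps a network density and an evidence-sharing rate to a
    convergence time; a real run would simulate the epistemic
    network, this formula only mimics its response surface.
    """
    return 100.0 * (1.0 - density) / (0.1 + sharing_rate)

# Build a small training set from "expensive" simulation runs.
random.seed(0)
train = []
for _ in range(200):
    d, s = random.random(), random.random()
    train.append(((d, s), abm_convergence_time(d, s)))

def surrogate_predict(point, k=5):
    """k-nearest-neighbour surrogate: average the outcomes of the k
    closest previously simulated parameter points instead of
    re-running the ABM for the queried point."""
    nearest = sorted(train, key=lambda rec: math.dist(rec[0], point))
    return sum(y for _, y in nearest[:k]) / k

query = (0.5, 0.5)
approx = surrogate_predict(query)   # cheap surrogate answer
exact = abm_convergence_time(*query)  # what a full run would give
```

In practice one would use a richer supervised learner (e.g. a random forest or Gaussian process) and real ABM outputs, but the workflow is the same: simulate a sample of the parameter space once, then interrogate the fitted surrogate.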
The incorporation of ML modeling into scientific reasoning invites reflection on philosophical practice. No longer bound solely to deductive reasoning, the philosophy of science is increasingly conditioned by probabilistic models and algorithmic learning. What does it mean to explain or theorize within an epistemic landscape where machines do the modeling? Are we gaining new forms of insight or sacrificing depth for speed? These questions emerge through the presented philosophical cases, as computational modeling shows how the structure of inquiry itself can entrench error deeply enough to make it resistant to correction.
References:
[1] Ferraz-Caetano, J. Modeling Innovations: Levels of Complexity in the Discovery of Novel Scientific Methods. Philosophies 2025, 10, 1. https://doi.org/10.3390/philosophies10010001