Axiology and the Evolution of Ethics in the Age of AI
1  Computer Science and Engineering, Mälardalens University
Academic Editor: Marcin Schroeder

Abstract:

Artificial intelligence (AI), particularly autonomous systems, challenges traditional ethical frameworks by reshaping human values, agency, and responsibility. This paper argues that axiology—the philosophical study of values—offers a critical foundation for AI ethics by accommodating the dynamic relationship between technology and morality. Unlike rigid ethical theories, axiology provides an adaptive approach to algorithmic bias, depersonalized healthcare, and AI-mediated governance.

We propose an integrative axiological model that synthesizes deontological, utilitarian, and virtue ethics to ensure that AI aligns with pluralistic human values. This framework balances duty (transparency, fairness), outcomes (social good, efficiency), and virtue (human dignity, trust), akin to multicriteria decision analysis (MCDA) (Sapienza, Dodig-Crnkovic, & Crnkovic, 2016), which systematically evaluates competing priorities in complex decision-making. For example, while utilitarianism might favor AI’s cost-saving healthcare diagnostics, virtue ethics ensures patient autonomy remains central, and deontology requires transparency in algorithmic decisions. This synthesis prevents AI from privileging one value (e.g., efficiency) at the expense of others (e.g., privacy), while grounding trade-offs in explicit, multi-criteria decisions.
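To make the MCDA analogy concrete, the weighted-sum aggregation it rests on can be sketched in a few lines. This is an illustrative toy only: the criteria names, alternatives, scores, and equal weights below are hypothetical assumptions chosen for demonstration, not values drawn from the paper or from the cited MCDA work.

```python
def mcda_score(scores, weights):
    """Weighted-sum aggregation of normalized criterion scores in [0, 1]."""
    assert set(scores) == set(weights), "scores and weights must cover the same criteria"
    total_w = sum(weights.values())
    return sum(scores[c] * weights[c] / total_w for c in scores)

# Hypothetical design alternatives for an AI diagnostic tool, scored 0..1 on
# a deontological criterion (transparency), a utilitarian one (efficiency),
# and a virtue-ethics one (patient autonomy).
alternatives = {
    "opaque_but_fast":   {"transparency": 0.3, "efficiency": 0.9, "autonomy": 0.4},
    "explainable_model": {"transparency": 0.9, "efficiency": 0.6, "autonomy": 0.8},
}

# Equal weights: no single value (e.g., efficiency) is allowed to dominate.
weights = {"transparency": 1.0, "efficiency": 1.0, "autonomy": 1.0}

best = max(alternatives, key=lambda a: mcda_score(alternatives[a], weights))
print(best)  # explainable_model
```

The point of the sketch is the structure, not the numbers: each ethical framework contributes a criterion, and the aggregation forces trade-offs to be stated explicitly rather than letting one value win by default.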

Case studies illustrate AI’s dual impact: In education, AI-powered learning enhances accessibility but may risk dehumanizing assessment. In healthcare, AI-driven diagnostics improve accuracy, yet excessive reliance on AI may threaten patient trust if empathy is overlooked. In governance, AI improves transparency but may raise ethical concerns over surveillance and bias in policing. These examples underscore the need for an evolutionary ethics, where values shift alongside technological advances.

This model aligns with Digital Humanism, which resists reducing humans to data points, and Responsible AI, which prioritizes accountability. Together, they advocate for AI that enhances—not undermines—human dignity, equity, and democratic agency.

To prevent ethical stagnation, policymakers and developers may adopt this axiological lens, ensuring that AI evolves as a tool for societal flourishing rather than a destabilizing and depersonalizing force. By focusing on axiology, we reframe AI ethics as a living discipline, one that reconciles competing values and safeguards humanity’s moral commitments in an algorithmic age.

References

Sapienza, G., Dodig-Crnkovic, G., and Crnkovic, I. (2016). "Inclusion of Ethical Aspects in Multi-Criteria Decision Analysis." In Proceedings of the 1st International Workshop on Decision Making in Software ARCHitecture (MARCH), WICSA and CompArch 2016, Venice, April 5-8, 2016. IEEE. DOI: 10.1109/MARCH.2016.5. ISBN: 978-1-5090-2573-2.

Keywords: Axiology, Responsible AI, Digital Humanism, AI Ethics, Philosophy of Intelligence, Deontological Ethics, Utilitarian Ethics, Virtue Ethics, Multicriteria Decision Analysis
