Recent advances in the use of military robots in combat raise serious ethical questions. The role of these robots has expanded from simple reconnaissance and surveillance to deadly strikes on enemy positions. As the technology matures, militaries continue to push for greater autonomy in order to reduce operating and maintenance costs. Yet as military robots begin to make decisions on their own, the question of who bears moral responsibility for their actions becomes blurred.
If a drone mistakenly destroys a school instead of its intended target, who is responsible? How should an ethical military robot be designed? How can a military robot reasonably be held accountable for wrongful killing? How states, societies, and even individuals respond to these challenges will become particularly important and urgent.
This article is divided into five parts. Section 1 provides a background introduction to the ethical design and accountability of military robots. Section 2 summarizes the current status of military robots and their future development trends, the advantages and disadvantages of military robots compared with humans, the uniqueness of military robot ethics, the difficulty of designing ethical military robots, and the dilemma of holding them accountable. Section 3 focuses on how to design ethical algorithms for military robots. Derek Leben's Ethics for Robots outlines and justifies an approach for designing and evaluating the ethical algorithms used in autonomous machines, including self-driving vehicles and military rescue robots. Leben contends that these algorithms should be judged by how effectively they foster cooperation among self-interested agents. Starting from the premise that moral judgments are the product of evolutionary pressures for cooperative behavior in self-interested organisms, this paper compares the results of applying Rawls' contractualism with those of utilitarianism, libertarianism, deontology, and other moral theories across a range of dilemma games, and argues that only contractualism produces Pareto-optimal, cooperative behavior in all of these dilemmas. The Leximin procedure, a refinement of the Maximin algorithm based on the contractualist difference principle, is applicable to a wide variety of decision spaces. Section 4 centers on the attribution of responsibility for military robots. This paper contends that the challenge of determining the accountability of autonomous robots can be resolved by situating it within the framework of the military chain of command: decision-making in war is multi-layered, and the military hierarchy is precisely a system for assigning responsibility and limiting autonomy among decision-makers at different levels. Section 5 proposes specific countermeasures to address the ethical issues of military robotics. At the technical level, continuing to refine the algorithms that govern a robot's sensitivity to harm while raising its level of automation is an essential measure for ensuring technological safety. At the policy and regulatory level, the transparency of military robotics research and development should be strengthened, along with the accountability of those involved.
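To make the Leximin rule concrete, the minimal Python sketch below applies it to a toy decision space. The action names and payoff figures are hypothetical, invented purely for illustration; the sketch is not Leben's implementation but one straightforward reading of the rule: maximize the outcome of the worst-off party, breaking ties by the second worst-off, and so on.

```python
# A minimal sketch of the Leximin decision rule discussed above.
# All action names and payoff values are hypothetical examples.

def leximin_choice(actions):
    """Pick the action whose worst-off party fares best, breaking
    ties by the second worst-off party, then the third, and so on.

    `actions` maps an action name to a list of payoffs, one per
    affected party (higher is better).
    """
    # Sorting each payoff vector in ascending order puts the worst-off
    # party first; Python then compares the lists lexicographically,
    # which is exactly the leximin ordering.
    return max(actions, key=lambda a: sorted(actions[a]))

if __name__ == "__main__":
    # Two candidate actions with payoffs for three affected parties.
    # Plain Maximin ties here (both worst outcomes are 1); Leximin
    # breaks the tie in favor of the second worst-off party.
    options = {
        "action_A": [1, 5, 9],
        "action_B": [1, 6, 6],
    }
    print(leximin_choice(options))  # -> action_B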