A Decision Framework to Select Robotics Simulators for Automation and Control Tasks: Criteria and Validation
1  proMetheus, Higher School of Technology and Management, Polytechnic Institute of Viana do Castelo (IPVC), Rua Escola Industrial e Comercial de Nun’Álvares, 4900-347, Viana do Castelo, Portugal.
2  Centre for Mechanical Technology and Automation (TEMA), Department of Mechanical Engineering, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal.
3  ADiT-Lab—Applied Digital Transformation Laboratory, Higher School of Technology and Management, Polytechnic Institute of Viana do Castelo (IPVC), Rua Escola Industrial e Comercial de Nun’Álvares, 4900-347, Viana do Castelo, Portugal.
Academic Editor: Antonio J. Marques Cardoso

Abstract:

Robotics simulation is a key enabler for automation and control development, allowing safer experimentation, faster design iterations, and reduced development cost. Nevertheless, the current simulator ecosystem is highly fragmented, spanning open-source and commercial tools with different levels of physical fidelity, performance, and ecosystem integration. As a result, simulator selection is frequently driven by familiarity or availability rather than explicit task requirements, often leading to suboptimal engineering workflows. This paper proposes a task-oriented decision framework to support reproducible and transparent selection of robotics simulators based on a fixed and structured set of evaluation criteria. These criteria cover: (i) physical fidelity and contact modeling; (ii) sensor modeling and visual realism; (iii) performance and scalability aspects, including headless execution, parallelism, and GPU acceleration; (iv) ecosystem integration with automation, control, and learning pipelines, including ROS/ROS 2 compatibility; (v) extensibility and programmability; and (vi) practical constraints such as hardware requirements, licensing models, and learning curve. The framework is operationalized through a checklist and scoring matrix guided by four key questions addressing the target task, fidelity-versus-speed priorities, target software stack, and sim-to-real transfer requirements. To validate feasibility in a representative engineering workflow, a URDF-based modeling and simulation pipeline is implemented and used to compare Gazebo, as an open-source physics-based simulator, against MATLAB/Simulink, representing a commercial model-based simulation environment. The comparison reports practical indicators, including setup effort, integration complexity, computational requirements, and runtime behavior, for representative motion and sensing scenarios. 
The results highlight consistent trade-offs across different user profiles and application needs, while also revealing open gaps in the field, notably the lack of unified multi-task benchmarks and joint metrics capable of simultaneously capturing simulation fidelity, computational performance, and sim-to-real transfer effectiveness.
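To make the scoring-matrix step of the framework concrete, the sketch below shows how per-criterion scores for the six criterion groups could be combined into a single weighted score per simulator. All scores, weights, and the 1–5 scale are hypothetical illustrations, not values reported in the paper; the weights stand in for answers to the four key questions (e.g., a control-pipeline task with a ROS/ROS 2 stack would weight ecosystem integration heavily).

```python
# Hypothetical sketch of the framework's scoring-matrix step.
# Each simulator is scored 1-5 per criterion group; task-specific
# weights encode the answers to the four guiding questions.
CRITERIA = [
    "physical_fidelity",      # contact modeling, physics engines
    "sensor_modeling",        # visual realism, sensor simulation
    "performance",            # headless execution, parallelism, GPU
    "ecosystem_integration",  # ROS/ROS 2, automation/control/learning
    "extensibility",          # programmability, plugin APIs
    "practical_constraints",  # hardware, licensing, learning curve
]

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-criterion scores; weights need not sum to 1."""
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

# Illustrative (made-up) scores for the two simulators compared in the paper.
gazebo = {"physical_fidelity": 4, "sensor_modeling": 4, "performance": 3,
          "ecosystem_integration": 5, "extensibility": 4,
          "practical_constraints": 4}
simulink = {"physical_fidelity": 4, "sensor_modeling": 3, "performance": 3,
            "ecosystem_integration": 3, "extensibility": 3,
            "practical_constraints": 2}

# Weights for a hypothetical ROS 2-centric control task.
weights = {"physical_fidelity": 2, "sensor_modeling": 1, "performance": 2,
           "ecosystem_integration": 3, "extensibility": 1,
           "practical_constraints": 1}

print(weighted_score(gazebo, weights))    # 4.1
print(weighted_score(simulink, weights))  # 3.1
```

The ranking is deliberately sensitive to the weights: re-weighting toward licensing and learning-curve constraints, or toward model-based design workflows, can reverse the outcome, which is exactly the task-dependence the framework is meant to expose.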

Keywords: robotics simulation; simulator selection; MATLAB/Simulink; Gazebo; physics engines; automation and control; URDF workflow; sim-to-real.