Optimization of graded arrays of resonators for energy harvesting in sensors as a Markov decision process solved via reinforcement learning
Published: 01 November 2022 by MDPI in the 9th International Electronic Conference on Sensors and Applications, session Physical Sensors
Abstract:
The optimization of a mechanical system is typically tackled via time-consuming heuristic approaches, in which numerical simulations and/or experimental tests are performed to verify the physical understanding of the problem and to tune the parameters governing the design. Both aspects are proposed to be handled automatically by framing the optimization task as a Markov decision process, in which states describe specific system configurations and actions represent modifications to the current design. The physics-based understanding of the problem is exploited to constrain the set of admissible modifications to the mechanical system. This formalization of the optimization process is applied to design the grading of an array of resonators for energy harvesting in sensor applications. Specifically, attention is paid to setting the resonator heights, with resonators removed whenever convenient. Finite element simulations are used to evaluate the effect of each action and to inform the reinforcement learning agent. The proximal policy optimization algorithm, one of the most recent and powerful policy gradient algorithms, is employed to solve the Markov decision process. The procedure is shown to automatically exploit the physical principles that guided past design attempts, eventually leading to suboptimal configurations that enhance the performance of the mechanical system with respect to previously proposed designs. The proposed framework is not limited to the application at hand, but can be generalized to a large class of sensor design optimization problems.
Keywords: energy harvesting for sensors; metamaterials; reinforcement learning; Markov decision process
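As a minimal, illustrative sketch of the Markov decision process formulation described in the abstract (not the authors' actual implementation), the environment below encodes the state as the vector of resonator heights, with discrete actions that raise, lower, or remove a resonator; the reward is computed by a placeholder surrogate standing in for the finite element evaluation of the harvested energy. All names, dimensions, and the `surrogate_performance` function are hypothetical, and the proximal policy optimization agent from Stable-Baselines3 is shown only as one possible policy gradient solver.

```python
# Hypothetical sketch: grading an array of resonators as an MDP.
# The FE solver is replaced by a toy surrogate; all names and values
# are illustrative and not taken from the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

N_RESONATORS = 10                          # assumed array size
HEIGHT_STEP = 0.1                          # assumed height increment; 0.0 encodes a removed resonator

def surrogate_performance(heights: np.ndarray) -> float:
    """Placeholder for the finite element evaluation of harvested energy.
    In the actual workflow, an FE simulation of the graded array would
    return a scalar performance metric for the current configuration."""
    active = heights[heights > 0.0]
    if active.size == 0:
        return 0.0
    # Toy smooth objective rewarding a gently graded profile.
    grading_penalty = np.sum(np.abs(np.diff(active) - 0.1))
    return float(active.mean() - grading_penalty)

class ResonatorArrayEnv(gym.Env):
    """State: vector of resonator heights. Actions: pick a resonator and
    raise it one step, lower it one step, or remove it. Physics-based
    constraints on admissible modifications would be enforced here."""

    def __init__(self, horizon: int = 50):
        super().__init__()
        self.horizon = horizon
        self.observation_space = spaces.Box(0.0, 1.0, shape=(N_RESONATORS,), dtype=np.float32)
        self.action_space = spaces.Discrete(3 * N_RESONATORS)  # raise / lower / remove per resonator

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.heights = np.full(N_RESONATORS, 0.5, dtype=np.float32)  # uniform starting design
        self.best = surrogate_performance(self.heights)
        return self.heights.copy(), {}

    def step(self, action):
        idx, kind = divmod(int(action), 3)
        if kind == 0:          # raise the selected resonator
            self.heights[idx] = min(self.heights[idx] + HEIGHT_STEP, 1.0)
        elif kind == 1:        # lower the selected resonator
            self.heights[idx] = max(self.heights[idx] - HEIGHT_STEP, 0.0)
        else:                  # remove the selected resonator
            self.heights[idx] = 0.0
        perf = surrogate_performance(self.heights)
        reward = perf - self.best          # reward improvements over the best design so far
        self.best = max(self.best, perf)
        self.t += 1
        truncated = self.t >= self.horizon
        return self.heights.copy(), reward, False, truncated, {}

if __name__ == "__main__":
    env = ResonatorArrayEnv()
    model = PPO("MlpPolicy", env, verbose=0)   # proximal policy optimization agent
    model.learn(total_timesteps=20_000)
```

In this sketch the surrogate is cheap, so thousands of steps are affordable; with a finite element model in the loop, each `step` call would be far more expensive, which is one reason for constraining the action set with physics-based knowledge as the abstract describes.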