PUF Modeling Attacks using Deep Learning and Machine Learning Algorithms
Published: 09 November 2023 by MDPI
in The 4th International Electronic Conference on Applied Sciences, session Computing and Artificial Intelligence
Abstract:
The rapid advancement of technology has made electronic devices pervasive in daily life, bringing convenience and connectivity but also new security risks. Cryptography offers solutions, yet vulnerabilities persist through physical attacks and malware, which motivated the emergence of Physical Unclonable Functions (PUFs). PUFs exploit the inherent manufacturing disorder of physical systems to generate unique responses to challenges. Strong PUFs, which expose a large challenge-response space, are susceptible to modeling attacks: a malicious party can predict their responses using machine learning and algebraic techniques. Weak PUFs, which support only a small number of challenges, face similar threats when they are built upon strong PUFs. Despite these weaknesses, PUFs serve as security primitives in a variety of protocols. The success of a modeling attack depends on choosing a suitable model and machine learning algorithm; Logistic Regression and Random Forest classifiers are effective in this context, and deep learning techniques, including Artificial Neural Networks (ANNs) and Convolutional Neural Networks (CNNs), show particular promise on one-dimensional challenge-response data. Experimental results indicate that the CNN performs best, achieving precision, recall, and accuracy above 90% and thereby breaking the security of the targeted PUF. In conclusion, the paper highlights the urgent need for improved security measures in the face of evolving technological threats: it applies deep learning techniques, particularly CNNs, to expose the vulnerability of PUFs to modeling attacks, and the findings underscore the critical importance of reevaluating PUF security protocols.
Keywords: PUFs; security; cyber-security; challenge-response data; deep learning; modeling attacks
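The modeling attack outlined in the abstract can be illustrated with a minimal sketch: an attacker collects challenge-response pairs (CRPs) from a strong PUF and fits a machine learning model that predicts responses to unseen challenges. The sketch below assumes a simulated 16-stage arbiter PUF under the standard additive delay model and attacks it with logistic regression trained by gradient descent; all sizes, seeds, and the choice of a NumPy-only implementation are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative modeling attack on a simulated arbiter PUF (assumed parameters).
rng = np.random.default_rng(0)
n_stages, n_train, n_test = 16, 4000, 1000

# Secret delay weights of the simulated PUF; the attacker never sees these.
w_true = rng.normal(size=n_stages + 1)

def features(challenges):
    # Standard parity transform for arbiter PUFs: Phi_i = prod_{j>=i} (1 - 2*c_j),
    # plus a constant bias feature.
    signs = 1 - 2 * challenges                       # map {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def puf_response(challenges):
    # Additive delay model: the response is the sign of a linear function of Phi.
    return (features(challenges) @ w_true > 0).astype(float)

# Collect challenge-response pairs: the attacker's training and test data.
C = rng.integers(0, 2, size=(n_train + n_test, n_stages))
X, y = features(C), puf_response(C)

# Logistic regression via full-batch gradient descent.
w = np.zeros(X.shape[1])
for _ in range(500):
    z = np.clip(X[:n_train] @ w, -30.0, 30.0)        # avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * X[:n_train].T @ (p - y[:n_train]) / n_train

# Accuracy on challenges the model has never seen.
acc = np.mean((X[n_train:] @ w > 0) == (y[n_train:] > 0))
print(f"test accuracy: {acc:.3f}")
```

Because the arbiter PUF is exactly linear in the transformed features, even this simple model predicts unseen responses far better than chance, which is the core observation behind modeling attacks; the CNN results reported in the abstract follow the same attack pattern with a more expressive model.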