Applying Existing Large Language Models for PCB Routing
Published:
23 November 2024
by MDPI
in 2024 International Conference on Science and Engineering of Electronics (ICSEE'2024)
session Deep Learning and Data Analytics in Electronics
Abstract:
Large language models (LLMs) such as GPT-4 and Gemini have achieved excellent performance on natural-language tasks and show growing promise in logical reasoning. However, in the intricate field of printed circuit board (PCB) routing, complex scenarios still depend largely on the expertise of experienced engineers, demanding considerable time and effort. The ability of LLMs to handle logical problems highlights their potential for addressing PCB-routing challenges. This paper introduces an approach that leverages few-shot learning and chain-of-thought prompting within LLMs to assist engineers in PCB design with minimal data input. By testing LLMs with a limited number of examples under zero-shot, one-shot, and few-shot settings, we assess the models' performance and show that few-shot prompting performs best, illustrating its potential to streamline design tasks. Furthermore, we explore fine-tuning to enhance the few-shot approach; to overcome the scarcity of real-world PCB cases, we fine-tune the model on code-synthesized cases in place of actual PCB scenarios, ultimately improving the LLMs' capability to manage intricate routing tasks. The results validate the feasibility and effectiveness of this method, offering a promising avenue for reducing the manual burden in PCB design.
Keywords: LLMs; PCB routing; few-shot learning; fine-tuning
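The abstract describes combining few-shot examples with chain-of-thought prompting for routing queries. The paper does not specify its prompt format, so the sketch below is only illustrative: the grid-routing problems, the coordinate notation, and the helper `build_few_shot_prompt` are all hypothetical stand-ins for whatever representation the authors actually used.

```python
# Hypothetical sketch of a few-shot, chain-of-thought prompt for a toy
# grid-routing query. Example problems and formats are assumptions,
# not the paper's actual prompt design.

FEW_SHOT_EXAMPLES = [
    {
        "problem": "Route net N1 from pin (0, 0) to pin (0, 3) on a 4x4 grid.",
        "reasoning": "Both pins share column 0, so a straight vertical run "
                     "of three unit segments connects them without bends.",
        "answer": "(0,0) -> (0,1) -> (0,2) -> (0,3)",
    },
    {
        "problem": "Route net N2 from pin (1, 0) to pin (3, 2) on a 4x4 grid.",
        "reasoning": "Step twice in x, then twice in y; a single bend keeps "
                     "the path at the minimal Manhattan length of 4.",
        "answer": "(1,0) -> (2,0) -> (3,0) -> (3,1) -> (3,2)",
    },
]

def build_few_shot_prompt(examples, query):
    """Concatenate solved examples (with reasoning) ahead of the new query."""
    parts = ["You are a PCB routing assistant. Think step by step."]
    for ex in examples:
        parts.append(f"Problem: {ex['problem']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Answer: {ex['answer']}")
    # End with an open "Reasoning:" cue so the model continues the pattern.
    parts.append(f"Problem: {query}")
    parts.append("Reasoning:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    FEW_SHOT_EXAMPLES,
    "Route net N3 from pin (2, 3) to pin (0, 1) on a 4x4 grid.",
)
```

Dropping the solved examples from the list yields the zero-shot baseline, and keeping only one yields the one-shot setting, so the same scaffold can reproduce all three conditions compared in the paper.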
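The abstract also mentions fine-tuning on code-synthesized cases instead of scarce real PCB data. A minimal sketch of that idea, under the assumption (not stated in the abstract) that a synthetic case pairs a randomly generated routing problem with a ground-truth path from a classical search, might look like this; the grid size, record schema, and JSONL serialization are illustrative choices.

```python
# Hypothetical synthetic-case generator: random two-pin problems on an
# unobstructed grid, solved by BFS, serialized as prompt/completion
# records. All names and formats are assumptions for illustration.
import json
import random
from collections import deque

def shortest_path(start, goal, size):
    """BFS shortest path between two cells on an unobstructed grid."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            break
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in prev:
                prev[(nx, ny)] = (x, y)
                queue.append((nx, ny))
    path, node = [], goal
    while node is not None:       # walk predecessors back to the start
        path.append(node)
        node = prev[node]
    return path[::-1]

def make_synthetic_case(rng, size=4):
    """One training record with a BFS-derived ground-truth route."""
    start = (rng.randrange(size), rng.randrange(size))
    goal = (rng.randrange(size), rng.randrange(size))
    while goal == start:
        goal = (rng.randrange(size), rng.randrange(size))
    path = shortest_path(start, goal, size)
    return {
        "prompt": f"Route a net from pin {start} to pin {goal} "
                  f"on a {size}x{size} grid.",
        "completion": " -> ".join(f"({x},{y})" for x, y in path),
    }

rng = random.Random(0)            # fixed seed for reproducibility
dataset = [make_synthetic_case(rng) for _ in range(100)]
jsonl = "\n".join(json.dumps(rec) for rec in dataset)
```

Because the generator produces labels programmatically, the dataset size is limited only by compute, which is the advantage the abstract claims over collecting real PCB routing cases.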