Transient noise artifacts, or glitches, pose a serious challenge for gravitational-wave astronomy. These artifacts can overlap with real gravitational-wave signals and bias parameter estimation. As existing detectors are upgraded and new detectors come online, accurately identifying and classifying glitches will become increasingly important. While methods for handling glitches exist, many are computationally expensive, and as data volumes grow, the cost of removing or mitigating glitches grows with them.
This motivates the following question: can machine learning serve as an effective tool for glitch classification and mitigation? Using labeled data from Gravity Spy, we expand and balance our dataset with generative adversarial networks (GANs). We then evaluate a range of convolutional neural network (CNN) and vision transformer architectures on the task of classifying gravitational-wave glitches. Hyperparameters such as the learning rate and weight decay are tuned with Optuna. Model performance is assessed using standard classification metrics, including accuracy, precision, recall, F1 score, and confusion matrices.
By comparing a diverse set of models, we examine the trade-offs between model complexity, interpretability, and classification performance. These results provide guidance for selecting machine learning models that are both accurate and computationally affordable for future gravitational-wave detector characterization pipelines.
