GravSpike: A Neuro-Inspired Gravitational Preprocessing Framework for Abstractive Summarization of Long Documents
1  Department of Computer Science, Faculty of Computing, Northwest University, Kano, PMB 3099, Kano, Nigeria
2  Department of Software Engineering, Northwest University, Kano, PMB 3099, Kano, Nigeria
Academic Editor: Lucia Billeci

Abstract:

Transformer-based models struggle with long-document summarization due to fixed input length constraints. To mitigate this issue, hybrid approaches typically perform an extractive preprocessing step, selecting salient sentences as input to an abstractive summarization model. However, most unsupervised extractive methods, such as TextRank and LexRank, rely on shallow heuristics and fail to preserve semantic coherence or minimize redundancy. We propose GravSpike, a neuro-inspired preprocessing framework for extractive–abstractive summarization. GravSpike integrates SBERT-based sentence embeddings with a gravitational ranking model that scores sentences based on lexical salience, positional weight, and semantic proximity, modeled using the gravitational force equation. To further enhance content diversity and reduce redundancy, we introduce a spiking neuron-inspired filtering mechanism that iteratively activates informative sentences based on adaptive firing thresholds. A multi-objective Ant Colony Optimization (ACO) algorithm then selects an optimal subset, balancing ROUGE-based relevance and SBERT-based semantic cohesion. We evaluate GravSpike on three long-document datasets, BillSum, PubMed, and arXiv, by comparing abstractive summaries generated by BART and T5 with and without GravSpike preprocessing. Experimental results show that GravSpike-enhanced inputs consistently yield higher ROUGE-1, ROUGE-2, and ROUGE-L scores than the same models applied directly to truncated or full-length documents. On the BillSum dataset, GravSpike achieves ROUGE-1, ROUGE-2, and ROUGE-L scores of 58.83, 37.63, and 44.47, respectively (p < 0.01). These findings demonstrate GravSpike’s effectiveness as a modular, unsupervised filtering pipeline that significantly improves the performance of large language models on long-form summarization tasks.
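The gravitational ranking and spiking-style filtering described in the abstract can be sketched in plain Python. Note that the pairwise force formula used here (F_ij = m_i · m_j / d_ij², with d_ij taken as one minus cosine similarity) and the decaying-threshold firing rule are illustrative assumptions modeled on the gravitational analogy, not the paper's exact equations; toy vectors stand in for SBERT embeddings, and the "mass" m_i is assumed to combine lexical salience and positional weight.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def gravitational_scores(embeddings, masses, eps=1e-6):
    """Score each sentence by its summed pairwise 'gravitational force',
    F_ij = m_i * m_j / d_ij**2, where d_ij = 1 - cosine similarity
    (an assumed instantiation of the gravitational analogy)."""
    n = len(embeddings)
    scores = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = 1.0 - cosine(embeddings[i], embeddings[j]) + eps
            scores[i] += masses[i] * masses[j] / (d * d)
    return scores

def spiking_filter(scores, decay=0.9, budget=2):
    """Iteratively 'fire' sentences whose potential exceeds an adaptive
    threshold that decays each round (a loose leaky integrate-and-fire
    analogy; the decay rule is an assumption for illustration)."""
    theta = max(scores)
    fired = []
    while len(fired) < budget:
        theta *= decay  # lower the firing threshold each iteration
        for i, s in enumerate(scores):
            if i not in fired and s >= theta:
                fired.append(i)
                if len(fired) == budget:
                    break
    return fired

# Toy example: three "sentences", the first two semantically close.
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
mass = [1.0, 1.0, 0.5]          # salience x positional weight (assumed)
scores = gravitational_scores(emb, mass)
selected = spiking_filter(scores, budget=2)
```

In a full pipeline, the indices returned by `spiking_filter` would form the candidate pool that the multi-objective ACO stage then refines into the final extractive subset fed to BART or T5.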

Keywords: Abstractive Summarization; Extractive Summarization; Neuro-Inspired Computing; Gravitational Ranking; Spiking Neural Model; Ant Colony Optimization; Long Document Summarization