Enhancing Live Fence Detection through Foundation Model Integration: A Scene-Level Deep Learning Approach

Published: 25 March 2025 by MDPI in International Conference on Advanced Remote Sensing (ICARS 2025), session Big Data Analytics, Machine Learning, Cloud Computing and Artificial Intelligence

Abstract:
Monitoring live fences in agroforestry landscapes is crucial for understanding ecosystem connectivity and biodiversity conservation, yet traditional detection methods struggle with their complex spatial–spectral characteristics. Building on our previous work on multi-stream deep learning for live fence detection, which achieved over 83% accuracy, we propose a novel approach that integrates foundation models to enhance scene-level classification. Our framework combines specialized vegetation detection features with pre-trained visual knowledge through a dual-stream architecture while leveraging optimal spectral band configurations. The methodology uses NIR–Green–Blue bands with NDVI integration, enhanced by self-attention mechanisms for improved contextual understanding. We evaluated the approach on multi-temporal PlanetScope imagery from three distinct agroforestry sites in Ecuador, capturing both dry and rainy seasons. This research advances automated live fence monitoring by combining specialized spectral analysis with the robust feature learning of foundation models, offering an improved solution for sustainable landscape management. The proposed approach aims to enhance detection accuracy while maintaining computational efficiency and supporting practical applications in conservation planning and policy implementation.
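The spectral input configuration described in the abstract (NIR–Green–Blue bands plus an NDVI channel) can be sketched as below. The function names, array shapes, and the assumed Blue–Green–Red–NIR band order of the input array are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    `eps` guards against division by zero over dark pixels.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

def build_scene_input(scene):
    """Stack NIR, Green, Blue, and NDVI into a 4-channel model input.

    `scene` is assumed to be an (H, W, 4) reflectance array ordered
    Blue, Green, Red, NIR (an assumption for this sketch).
    """
    blue, green, red, nir = (scene[..., i].astype(np.float32) for i in range(4))
    return np.stack([nir, green, blue, ndvi(nir, red)], axis=-1)

# Toy example: a random 64x64 "scene" with 4 spectral bands.
scene = np.random.rand(64, 64, 4).astype(np.float32)
x = build_scene_input(scene)
print(x.shape)  # (64, 64, 4)
```

Each scene tile prepared this way would feed the vegetation-specific stream; the foundation-model stream would typically consume a standard RGB rendering of the same tile.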
Keywords: Deep Learning; Foundation Models; Live Fences; Remote Sensing; Scene Classification; Agroforestry
