
List of accepted submissions

 
 
 
  • Open access
  • 19 Reads
Mitigating Label Noise in Remote Sensing: A Pseudo-Labeling Method for Forest Classification with Sentinel-2

The accuracy of large-area forest mapping is often compromised by the label noise present in global land cover products like ESA WorldCover. This study introduces a robust semi-supervised framework designed to mitigate this issue by leveraging a small, trusted set of manually curated clean data to refine a large, noisy dataset.

Our approach employs a modified ResNet-18 architecture in a two-stage training process. First, the model is trained exclusively on the high-quality, manually labeled clean dataset. This initial "teacher" model is then used to generate high-confidence pseudo-labels for the extensive but noisy WorldCover data, effectively filtering and re-labeling uncertain or incorrect regions. In the second stage, the model is fine-tuned on a composite dataset containing both the original clean labels and the newly generated, reliable pseudo-labels. This strategy leverages the accuracy of the clean data to improve the utility of the noisy data, significantly enhancing model robustness and generalization. The methodology was tested using Sentinel-2 and Digital Elevation Model (DEM) data in a case study covering the diverse forest ecosystems of North Africa.
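
The confidence-filtering step described above can be sketched in a few lines. The abstract does not give the threshold actually used, so the 0.95 value and the function name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def generate_pseudo_labels(teacher_probs, threshold=0.95):
    """Keep only teacher predictions whose max class probability clears the threshold.

    teacher_probs: (N, C) array of softmax outputs from the clean-data teacher.
    Returns the indices of confident samples and their hard pseudo-labels;
    the remaining noisy samples are discarded from the second-stage training set.
    """
    confidence = teacher_probs.max(axis=1)
    keep = confidence >= threshold
    pseudo = teacher_probs.argmax(axis=1)
    return np.nonzero(keep)[0], pseudo[keep]
```

The retained pseudo-labels would then be concatenated with the clean labels for the fine-tuning stage.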

Our semi-supervised methodology demonstrated exceptional performance, achieving a final classification accuracy of 98.50% on a combined validation set. The initial training on clean data showed rapid convergence, underscoring the power of a high-quality seed dataset. This research offers a practical and highly effective strategy for improving land cover classification in any region where large, noisy datasets are available alongside limited high-quality ground truth, providing a scalable solution to support global conservation efforts.

  • Open access
  • 13 Reads
From Global Noise to Local Accuracy: An Abstaining Classifier Approach for Robust Forest Mapping with Noisy Global Data

Accurate forest mapping is frequently hindered by the label noise inherent in large-scale global land cover products and the scarcity of high-quality local ground-truth data. This paper presents a novel AI-driven framework that effectively addresses this challenge by synergistically combining the broad coverage of noisy global datasets with a small, trusted set of clean annotations. Our approach utilizes a DeepLabV3+ architecture with Sentinel-2 multispectral imagery and derived vegetation indices as input.

The core of our methodology lies in a hybrid data strategy and a specialized composite loss function. We employ strategic batch sampling to prioritize learning from the clean dataset (85% of each batch) while still benefiting from the contextual coverage of the noisy data. Our composite loss function, integrating Dice Loss, Categorical Focal Loss, and a Deep Abstaining Classifier (DAC) Loss, is explicitly designed to manage label noise and boundary uncertainty by empowering the model to abstain from predictions on highly uncertain pixels.
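
The 85/15 batch-sampling strategy above is simple to express; a hypothetical numpy sketch (function name, batch size, and seeding are assumptions, not details from the paper):

```python
import numpy as np

def sample_hybrid_batch(clean_idx, noisy_idx, batch_size=32,
                        clean_frac=0.85, rng=None):
    """Draw one training batch that prioritises the trusted set:
    clean_frac of the samples come from the clean indices,
    the remainder from the noisy global-product indices."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_clean = int(round(batch_size * clean_frac))
    batch_clean = rng.choice(clean_idx, size=n_clean, replace=True)
    batch_noisy = rng.choice(noisy_idx, size=batch_size - n_clean, replace=True)
    return np.concatenate([batch_clean, batch_noisy])
```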

The framework's efficacy was validated on the complex forested landscapes of North Africa, a region under-represented in many global training datasets. Our model achieved a validation F1-score of 91% and an Intersection over Union (IoU) of 83% on the held-out clean data. Critically, it achieved a recall of 95.4%, substantially minimizing the omission of forest areas compared to a baseline U-Net. This work provides a reliable, scalable methodology for refining large-scale, imperfect datasets, offering a robust solution for forest monitoring in data-challenged regions worldwide.

  • Open access
  • 6 Reads
Long-Period Fiber Gratings coupled with Imprinted Biopolymers and Hydrogels for Advanced Biosensing Applications

Long-period fiber gratings (LPFGs) have emerged as highly sensitive optical platforms for biosensing due to their responsiveness to refractive index changes. However, achieving the necessary sensitivity and selectivity for detecting biomolecules in complex real samples remains challenging. To overcome this, natural and biomimetic biological recognition elements (BREs) have been integrated with LPFGs, with the key step being the formation of a selective biolayer tailored to the target analyte.

This work explores imprinted biopolymers, novel synthetic antibody mimics produced via the spontaneous polymerization of endogenous neurotransmitters such as dopamine or serotonin in the presence of molecular templates. Using LPFG sensors, the polymerization kinetics and growth conditions of a serotonin-based imprintable biopolymer were monitored in both templated and non-templated conditions, demonstrating their potential as functional coatings for biosensing.

Additionally, LPFGs were employed to study the coupling of various polymers and hydrogels to develop functional layers for BRE immobilization. In particular, the photopolymerization of acrylamide into polyacrylamide hydrogels was characterized. These hydrogels offer a porous and reactive matrix for the immobilization of antibodies and aptamers, enabling the capture of bacteria and the investigation of their antibiotic resistance.

Finally, the integration of LPFG sensors with thermally stabilized, custom-designed microfluidic systems enabled the combined advantages of the investigated polymers and hydrogels. This synergy enhances the performance of LPFG-based biosensors while maintaining their inherent benefits: versatility, low cost, and portability, thus presenting a promising approach for developing advanced, highly specific, and sensitive biosensing platforms.

  • Open access
  • 12 Reads
Use of Microbial Carriers in Anaerobic Digestion: Scientific and Research Aspects

The use of microbial carriers in anaerobic digestion (AD) has been recognised as a scientifically validated method to enhance microbial activity, process efficiency, and biogas production. Research focuses on natural and modified materials that support microbial colonisation, enzymatic activity, and process stability under varied conditions.

A promising approach is the silica/lignin system, combining the large surface area and inert nature of silica with the biochemically active components of lignin. This composite stimulates dehydrogenase activity and promotes microbial proliferation, increasing methane production during mesophilic digestion of sewage sludge.

Another effective carrier is the diatomaceous earth/peat (DEP) composite. Its porous structure and sorptive properties create favourable conditions for microbial consortia development, especially during digestion of food waste. DEP use results in enhanced enzymatic activity and stabilised process kinetics, even under fluctuating organic loads.

The chitosan/perlite (Ch/P) system offers a unique combination of biological functionality and structural support. Chitosan modulates microbial interactions and may reduce inhibitory effects, while perlite provides mechanical durability. This carrier improves substrate conversion efficiency and microbial retention.

Notably, next-generation sequencing (NGS) has played a crucial role in characterising microbial communities associated with these carriers. NGS revealed shifts in the abundance of key methanogenic and fermentative taxa, confirming the carriers’ influence on microbiome structure and function.

These findings highlight the importance of selecting appropriate microbial carriers to optimise AD processes. Continued research into such materials represents a vital step towards sustainable waste management and renewable energy production.

  • Open access
  • 10 Reads
PANI-coated Ni nanotubes for supercapacitors: influence of pore size of the applied template and fabrication process parameters

This work presents the fabrication and characterization of hybrid nanostructured electrodes – polyaniline supported on a nickel nanotube array – for use in supercapacitors. Ni nanotubes were prepared using a template, and the effects of the template and preparation parameters are discussed. First, an anodized aluminum oxide (AAO) template was grown in H3PO4 under the same applied field to obtain similar pore sizes and for two different time periods to obtain different AAO thicknesses. Nickel nanotubes (Ni NTs) were prepared chemically on different AAO templates, resulting in Ni NTs of the same diameter but different lengths. A thin Ni layer was then electrodeposited on top of the Ni NTs for use as a metal contact layer in the single electrode during electrochemical tests. Vertically oriented arrays of Ni NTs on a Ni layer were obtained after etching Al and AAO. The resulting Ni NT/Ni structures were used as a substrate for electrodeposition of a polyaniline (PANI) layer at two different applied voltages. The electrodeposition parameters were chosen for homogeneous polymer deposition. The microstructural characterization of the obtained structures was carried out using a scanning electron microscope, and individual Ni nanotubes with different lengths were easily observed. The vertically oriented Ni/Ni nanotube/PANI arrays were tested for electrochemical performance (using cyclic voltammetry and impedance spectroscopy) as individual electrodes for supercapacitors. The capacitance values strongly depended on the length of the tubes and the applied current during PANI deposition and showed the highest values of 325 F/cm² at 10 mV/s.
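
For readers reproducing the cyclic-voltammetry analysis, areal capacitance is commonly extracted from a single voltammetric sweep as C = ∫|I| dV / (v · ΔV · A); the abstract does not state its exact integration procedure, so the following numpy sketch assumes that single-sweep convention:

```python
import numpy as np

def areal_capacitance(voltage, current, scan_rate, area):
    """Areal capacitance from one monotonic CV sweep.

    voltage   : potential values of the sweep (V)
    current   : measured current at each potential (A)
    scan_rate : v, in V/s
    area      : electrode area, e.g. in cm^2
    """
    dV = voltage.max() - voltage.min()
    # trapezoidal integration of |I| over the sweep
    charge = np.sum((np.abs(current[1:]) + np.abs(current[:-1])) / 2.0
                    * np.diff(voltage))
    return charge / (scan_rate * dV * area)
```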

  • Open access
  • 10 Reads
Artificial Intelligence as a Decision-Support Tool in Crisis Management

Artificial Intelligence (AI) is becoming a game-changing and increasingly important tool in various fields, including disaster management. However, its true value lies in how effectively decision-makers and public authorities are able to implement it in real-world operations. This paper addresses the gap between technological potential and practical application by combining insights from academic research, expert analyses, and an overview of the current state of AI integration in Slovakia. Despite the growing relevance of digital tools in crisis response, the integration of modern technologies into the national crisis management structure remains limited. Even in the recently announced reform of crisis management in Slovakia, there is a noticeable absence of concrete steps aimed at leveraging AI or other advanced technologies. Yet AI holds significant potential to enhance the efficiency and quality of decision-making, support faster and more effective responses, improve resource planning and risk assessment, and ultimately contribute to better crisis-related processes. The paper highlights key areas where AI can be implemented for decision support, including predictive data analytics, resource optimization, and strategic governance. At the same time, it discusses the risks associated with the use of AI in high-stakes environments, such as concerns over data quality and accountability. The findings underscore the need for targeted education and training of crisis responders and public officials to ensure AI tools are used responsibly and effectively. Strengthening human competencies is essential to ensure that AI serves as a reliable decision-support tool, enhancing rather than replacing human judgment in crisis management.

  • Open access
  • 9 Reads
A clustering-enhanced explainable approach involving convolutional neural networks for predicting the compressive strength of lightweight aggregate concrete

Lightweight aggregate concrete (LWAC) is a practical alternative to conventional concrete in civil engineering, offering advantages such as reduced density, enhanced insulation properties, and improved seismic performance. However, segregation during compaction remains a limitation, potentially leading to non-uniform material distribution and decreased compressive strength. This study addresses this issue by combining non-destructive techniques with deep learning methods to predict the compressive strength of LWAC. We propose an explainable approach involving a convolutional recurrent neural network architecture, enhanced by unsupervised clustering and SHapley Additive exPlanations (SHAP), to improve interpretability. To optimize predictive performance, we evaluate aggregation strategies from the recurrent layer before passing to the dense layers, including configurations that apply full-sequence flattening, max pooling, average pooling, or an attention mechanism over the full sequence. Experimental results show that our model outperforms conventional machine learning methods such as multilayer perceptron (MLP), random forest (RF), and support vector regression (SVR), as well as ensemble methods like gradient boosting (GBR), XGBoost, LightGBM, and weighted average ensemble (WAE). Furthermore, when combined with unsupervised clustering, the model identifies latent behavioral patterns that are not observable through traditional evaluation techniques. This demonstrates the potential of integrating unsupervised clustering with interpretable deep learning as a reliable non-destructive approach for the structural assessment of LWAC.
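
The four aggregation strategies compared above (full-sequence flattening, max pooling, average pooling, and attention over the sequence) can be illustrated on a generic (T, D) recurrent output. A minimal numpy sketch, not the authors' implementation:

```python
import numpy as np

def aggregate(seq, mode="mean", w=None):
    """Collapse a (T, D) recurrent-layer output into a fixed-length vector.

    'flatten' keeps every timestep (T*D values); max/mean pooling keep D values;
    'attention' weights the timesteps by softmax(seq @ w) before averaging,
    where w is a learned (D,) scoring vector.
    """
    if mode == "flatten":
        return seq.reshape(-1)
    if mode == "max":
        return seq.max(axis=0)
    if mode == "mean":
        return seq.mean(axis=0)
    if mode == "attention":
        scores = seq @ w                          # (T,) attention logits
        alpha = np.exp(scores - scores.max())     # stable softmax
        alpha /= alpha.sum()
        return alpha @ seq                        # weighted average, (D,)
    raise ValueError(f"unknown mode: {mode}")
```

Whichever variant is chosen, its output is what feeds the dense prediction head.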

  • Open access
  • 6 Reads
MarineSumm: A Multi-Objective, Semantic-Aware Optimization Framework for Extractive Summarization of Legal Texts

Automatic text summarization is essential for managing information overload, particularly in domains such as law, where documents are lengthy and complex. Existing extractive summarization methods based on metaheuristic algorithms often suffer from poor initialization and single-objective optimization, resulting in redundant and semantically weak summaries. This work presents MarineSumm, a novel extractive summarization framework designed to improve both lexical quality and semantic relevance in long legal texts. MarineSumm enhances the Marine Predator Optimization (MPA) algorithm through three key contributions. First, it replaces random initialization with a PageRank-based strategy that uses sentence centrality to guide the starting population. Second, it incorporates SBERT embeddings to model semantic similarity between sentences more effectively. Third, it introduces a dynamic multi-objective fitness function that adaptively balances ROUGE scores and SBERT-based cosine similarity across iterations, optimizing for both surface-level relevance and deeper semantic alignment. To further refine candidate solutions, MarineSumm applies a controlled Lévy-flight mutation, sigmoid-based binarized encoding, and sentence-length constraints, which improve search diversity and ensure concise, coherent outputs. Evaluated on the BillSum dataset, MarineSumm achieves ROUGE-1 of 0.5348, ROUGE-2 of 0.2547, and ROUGE-L of 0.3344, outperforming standard metaheuristic baselines including Genetic Algorithm, Particle Swarm Optimization, and Ant Colony Optimization. These results demonstrate the effectiveness of integrating graph-based initialization, semantic-aware scoring, and adaptive optimization into the MPA framework. MarineSumm offers a robust, unsupervised solution for summarizing legal and technical documents and can serve as a reliable extractive component within hybrid summarization pipelines.
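
The PageRank-based initialization described above amounts to power iteration over a sentence-similarity graph. A hypothetical numpy sketch; the damping factor and iteration count are assumptions, not values taken from the paper:

```python
import numpy as np

def pagerank_init(sim, n_select, d=0.85, iters=50):
    """Rank sentences by PageRank centrality on a similarity graph,
    returning the indices of the most central sentences to seed
    the optimizer's starting population."""
    n = sim.shape[0]
    A = sim.astype(float).copy()
    np.fill_diagonal(A, 0.0)               # no self-links
    col = A.sum(axis=0)
    col[col == 0] = 1.0                    # avoid division by zero for isolated nodes
    M = A / col                            # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r)      # damped power iteration
    return np.argsort(-r)[:n_select]
```

In MarineSumm the similarity matrix would come from SBERT cosine similarities rather than the toy values used in a test.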

  • Open access
  • 12 Reads
Forecasting PM2.5 Concentrations with Machine Learning: Accuracy, Efficiency, and Public Health Implications

Air quality is a major concern nowadays, especially in large cities. Among airborne pollutants, particulate matter (PM), especially PM2.5, poses serious health risks to individuals with respiratory conditions. Accurate forecasting of PM levels is crucial to warn vulnerable populations and reduce exposure. Machine learning models can effectively predict PM concentrations based on historical data and meteorological conditions such as temperature and humidity. Such predictions can support timely public health interventions and environmental policy decisions. The selection of the optimal machine learning model for time series forecasting requires a careful balance between predictive accuracy and computational efficiency. This study evaluates a number of widely used models, including Random Forest (RF), Long Short-Term Memory (LSTM), Convolutional Neural Network-LSTM (CNN-LSTM), and Extreme Gradient Boosting (XGB), in the context of time series forecasting for particulate matter (PM) concentrations.

Performance is assessed using three key error metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Scaled Error (MASE). Additionally, the computational demands and development complexity of each model are analyzed.
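
All three metrics are standard; for reference, a minimal numpy implementation. MASE here scales the forecast MAE by the in-sample MAE of a naive one-step forecast, a common convention the abstract does not spell out:

```python
import numpy as np

def mse(y, yhat):
    """Mean Squared Error."""
    return float(np.mean((y - yhat) ** 2))

def rmse(y, yhat):
    """Root Mean Squared Error."""
    return float(np.sqrt(mse(y, yhat)))

def mase(y, yhat, y_train, m=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the
    in-sample MAE of the m-step naive forecast on the training series."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return float(np.mean(np.abs(y - yhat)) / scale)
```

MASE below 1 means the model beats the naive forecast on average, which makes it easy to compare models across series with different scales.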

The results are of practical interest for each of the evaluated models; in particular, they show how the best compromise between accuracy and computational efficiency can be achieved, allowing a prediction model with satisfactory predictive performance to be implemented.

  • Open access
  • 13 Reads
FusionX-Net: Cross-Attention Enhanced Masked Autoencoders for Multi-Modal Remote Sensing Data Fusion

Introduction:

Recent developments in self-supervised learning techniques for representation learning have gained considerable attention in the remote sensing sector, particularly for reducing the substantial costs related to annotating large satellite image datasets. In the context of multimodal data fusion, contrastive learning has become widely adopted to address domain discrepancies between various sensor types. However, contrastive methods heavily rely on data augmentation techniques, which require significant expertise, especially when dealing with multispectral remote sensing data. A promising yet often overlooked alternative is to employ masked image modeling-based pretraining techniques to bypass these challenges.

Methods:

In this research, we introduce FusionX-Net, a self-supervised learning framework that utilizes masked autoencoders and incorporates cross-attention mechanisms for the early and feature-level integration of synthetic aperture radar (SAR) and multispectral optical data. These two data modalities generally exhibit a significant domain gap, which complicates the fusion process. FusionX-Net effectively addresses this challenge by using its cross-attention design, improving the representation learning process.
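
A single cross-attention head of the kind described, in which optical tokens query the SAR tokens to pull in radar context, can be sketched in numpy. Shapes and weight matrices are illustrative, not the FusionX-Net configuration:

```python
import numpy as np

def cross_attention(opt_tokens, sar_tokens, Wq, Wk, Wv):
    """One cross-attention head: optical patch embeddings act as queries,
    SAR patch embeddings supply keys and values, so each optical token is
    refined with information from the other modality."""
    Q = opt_tokens @ Wq                              # (N_opt, d_k)
    K = sar_tokens @ Wk                              # (N_sar, d_k)
    V = sar_tokens @ Wv                              # (N_sar, d_v)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # scaled dot-product
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)         # softmax over SAR tokens
    return attn @ V                                  # (N_opt, d_v)
```

In the full model this fusion would operate on masked-autoencoder patch embeddings at both the early and feature levels.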

Results:

FusionX-Net achieves state-of-the-art performance well above 95% on a number of benchmarks. On BigEarthNet-MM, FusionX-Net achieves 95.2% mean average precision (mAP), outperforming Dino-MM (88.1%) and SatViT (90.4%). Even in the low-label regime (1% labels), it achieves an impressive 92.0% mAP, ahead of Dino-MM (81.5%) and SatViT (79.8%) by a wide margin. On SEN12MS, FusionX-Net achieves a Top-1 accuracy of 96.1%, significantly better than competitive baselines.

Conclusions:

The proposed approach offers an effective alternative to contrastive learning techniques, which typically require extensive data augmentation. It demonstrates the potential of self-supervised learning, particularly masked autoencoders, in solving challenges associated with multispectral data fusion.
