
List of accepted submissions

AI-Driven Multi-Modal Integration for Structural Health Monitoring of Photovoltaic Systems: A Nigerian Context

Introduction

Structural Health Monitoring (SHM) is essential for maintaining the reliability and long-term performance of photovoltaic (PV) systems, particularly in regions exposed to harsh environmental conditions. Photovoltaic installations in Nigeria operate under high solar irradiance, elevated ambient temperatures, dust accumulation, and limited maintenance accessibility, which accelerate structural degradation such as cell cracking, delamination, thermal hotspots, and shading losses. Existing SHM research is largely dominated by laboratory-based electroluminescence (EL) inspection and single-metric performance evaluation, limiting scalability and field applicability. To address these limitations, this study proposes a structured AI-driven multi-modal SHM evaluation framework that integrates unmanned aerial vehicle (UAV) imaging, thermal and RGB sensing, and deep learning diagnostics. Unlike experimental studies focused on single datasets, this work develops a systematic analytical framework that synthesizes and evaluates published photovoltaic diagnostic pipelines using objective multi-criteria decision analysis, enabling consistent cross-study comparison under field-relevant deployment conditions.

Methods

A literature-supported analytical framework was developed using Multi-Criteria Decision Theory (MCDT) to evaluate competing AI–hardware SHM pipelines. Eight representative AI–hardware alternatives were selected from peer-reviewed photovoltaic diagnostic studies using strict inclusion thresholds (≥90% Accuracy/F1/AUC or low regression error metrics) and verified field applicability. Performance indicators included Accuracy, F1-score, Area Under the Curve (AUC), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and coefficient of determination (R²). To ensure methodological consistency across heterogeneous studies, performance metrics were normalized and objectively weighted using the CRITIC (Criteria Importance Through Intercriteria Correlation) method. Robustness-sensitive criteria received the highest weights (AUC = 0.2782; RMSE = 0.1978; MAE = 0.1835; R² = 0.1844), while Accuracy received a minimal weight (0.0071) due to uniformly high reporting across studies. The weighted decision matrix was subsequently ranked using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to identify field-optimal SHM pipelines.
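The TOPSIS ranking step described above can be sketched in a few lines. The decision matrix, weights, and criterion directions below are illustrative placeholders only (three hypothetical pipelines, three criteria), not the study's actual data or CRITIC weights:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by the TOPSIS closeness coefficient Ci.

    matrix  -- rows are alternatives, columns are criteria (raw scores)
    weights -- criterion weights (e.g. from CRITIC), summing to 1
    benefit -- True where higher is better, False for cost criteria
    """
    n_crit = len(weights)
    # Vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    cols = list(zip(*v))
    # Ideal and anti-ideal points per criterion
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # Ci = d(anti) / (d(ideal) + d(anti)); Ci near 1 means near-ideal
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]

# Illustrative only: three hypothetical pipelines scored on AUC (benefit)
# and RMSE/MAE (costs), with made-up weights -- not the study's data.
ci = topsis([[0.96, 0.10, 0.08],
             [0.90, 0.20, 0.15],
             [0.85, 0.30, 0.25]],
            weights=[0.4, 0.3, 0.3],
            benefit=[True, False, False])
```

An alternative that dominates on every criterion coincides with the ideal point and receives Ci = 1; one dominated on every criterion receives Ci = 0.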

Results

TOPSIS ranking identified CNN combined with thermal imaging as the most effective SHM solution (Closeness Coefficient Ci = 0.567), followed by CNN + UAV (RGB/Thermal) (Ci = 0.505) and Hybrid CNN + Sensors/IoT + UAV systems (Ci = 0.497). CNN-thermal pipelines consistently demonstrated high diagnostic performance (≈95–96% accuracy) and strong AUC values (~0.96), while maintaining operational scalability for large photovoltaic farms. UAV-enabled CNN frameworks further improved inspection coverage by enabling non-contact, large-scale monitoring of distributed PV arrays. In contrast, EL-based CNN methods ranked lower (Ci = 0.370) due to laboratory constraints, while classical machine-learning approaches such as SVM and ensemble tree models exhibited weaker generalization and incomplete metric reporting (Ci = 0.032–0.258). Hierarchical clustering analysis further confirmed that UAV-thermal CNN pipelines form a high-performance family of SHM solutions optimized for real-world deployment environments.

Conclusion

This study provides a methodologically consistent, literature-derived multi-criteria evaluation framework for AI-based photovoltaic structural health monitoring pipelines. The results indicate that UAV-mounted thermal CNN workflows offer the most robust and scalable SHM solution for photovoltaic installations operating under Nigerian environmental conditions. By integrating multi-modal sensing with objective decision analysis, the proposed framework enables systematic selection of field-deployable PV diagnostic technologies and bridges the gap between laboratory-based research and practical solar farm monitoring in sub-Saharan Africa. The framework also establishes a foundation for future experimental validation using field-acquired UAV and thermal datasets.

Comparative Evaluation of YOLOv8n and YOLOv26n for Edge-Optimized Defect Detection in PV Electroluminescence Imaging

Photovoltaic systems play a central role in the worldwide transition to renewable energy sources, yet their efficiency and safety are frequently compromised by defects introduced during manufacturing or arising during their operation. Irregularities ranging from structural discontinuities to electrical anomalies can lead to considerable power losses and the development of localized hotspots. Electroluminescence (EL) imaging offers high-resolution, non-invasive visualization of cell-level defects, but manual inspection remains labor-intensive, subjective, and unscalable for large solar farms or remote installations. Recent advances in lightweight deep learning models, particularly the You Only Look Once (YOLO) family, enable automated, real-time detection. Nevertheless, achieving both strong detection performance and minimal computational demand on resource-constrained edge devices remains difficult, particularly when working with imbalanced, long-tailed distributions such as those found in the PVEL-AD dataset.

This feasibility study assesses two compact variants from the YOLO series: YOLOv8n, which serves as the established reference, and YOLOv26n, the most recent edge-focused model featuring a native end-to-end design without non-maximum suppression (NMS). Both models are trained and evaluated on a representative subset of the PVEL-AD dataset comprising 4500 electroluminescence (EL) images (3600 for training and 900 for validation) with 7799 annotated defect instances distributed across eight key defect types: black_core, crack, finger, thick_line, star_crack, horizontal_dislocation, vertical_dislocation, and short_circuit. The dataset displays a pronounced long-tail distribution, where finger and crack defects constitute more than half of all instances. Training used conservative settings to simulate resource-constrained experimentation: 512×512 input resolution, 50 epochs, early stopping (patience=10), and minimal augmentation. Both training and inference were run on a Tesla P100 GPU using the Ultralytics framework.

The evaluation adopts standard object detection metrics: mean Average Precision at IoU thresholds of 0.5 (mAP@0.5) and across 0.5 to 0.95 (mAP@0.5:0.95), along with precision, recall, and inference latency (encompassing preprocessing, inference, and postprocessing time in milliseconds per image, as well as frames per second). All measurements are derived from the 900-image validation subset, which includes 1586 defect instances.
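The IoU threshold underlying mAP@0.5 can be made concrete with a minimal implementation of the standard definition (this is generic object-detection bookkeeping, not code from the study):

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# At mAP@0.5 a prediction counts as a true positive when IoU >= 0.5 with a
# ground-truth box of the same class; mAP@0.5:0.95 averages over thresholds
# from 0.5 to 0.95 in steps of 0.05.
```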

On this validation set, YOLOv26n attains an overall mAP@0.5 of 0.897 and mAP@0.5:0.95 of 0.654, accompanied by a precision of 0.832 and recall of 0.891. In comparison, YOLOv8n yields modestly higher accuracy, with mAP@0.5 reaching 0.926, mAP@0.5:0.95 at 0.671, precision at 0.915, and recall at 0.889. Both architectures demonstrate excellent performance on larger or more structurally distinct defect categories (AP@0.5 exceeding 0.96 for black_core, short_circuit, and both dislocation types). Detection remains more moderate, however, on particularly difficult classes such as crack (0.687–0.737) and star_crack (0.768–0.771).

Inference speed markedly favors YOLOv26n, which records an average total latency of approximately 2.5 ms per image (corresponding to roughly 397 FPS), in contrast to approximately 3.3 ms (~302 FPS) for YOLOv8n. This advantage stems largely from the model's NMS-free architecture, which eliminates the computational overhead associated with traditional post-processing steps.
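The latency-to-throughput conversion is simple arithmetic. The per-stage splits below are hypothetical; only the approximate totals (~2.52 ms and ~3.31 ms) reflect the reported measurements:

```python
def fps(pre_ms, infer_ms, post_ms):
    """Frames per second from per-image latency components in milliseconds."""
    return 1000.0 / (pre_ms + infer_ms + post_ms)

# Hypothetical component splits; only the totals are taken from the abstract.
yolov26n_fps = fps(0.5, 1.7, 0.32)   # total ~2.52 ms  -> ~397 FPS
yolov8n_fps = fps(0.6, 2.4, 0.31)    # total ~3.31 ms  -> ~302 FPS
```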

This preliminary evaluation demonstrates that lightweight YOLO models, particularly YOLOv26n, offer a compelling trade-off between detection performance and inference speed for PV defect monitoring using EL imagery. Despite training on a subset with minimal augmentation, the models achieve competitive mAP@0.5 values (0.897–0.926) and sub-10 ms latency, making them suitable for real-time deployment on edge devices in industrial or remote solar environments. The strong performance on dominant defect types and reasonable results on rarer classes highlight their practical potential. Future work will extend this study to the full PVEL-AD dataset and incorporate additional architectures, stronger augmentation, and quantization for ultra-low-power settings.

Machine Learning-Based Optimization of Energy Consumption in Ion-Exchange Wastewater Treatment Systems

Improving the energy efficiency of industrial wastewater treatment processes is a critical challenge driven by increasing operational costs and sustainability requirements. In ion-exchange-based wastewater treatment systems, the specific final energy use is primarily associated with pumping operations and hydraulic losses, which are strongly influenced by flow regulation strategies. Conventional control approaches typically operate under fixed or conservatively selected flow conditions to ensure treatment quality, often leading to excessive final energy use, especially under varying influent water quality. This study investigates the application of machine learning (ML) methods to support energy-efficient operation of an ion-exchange wastewater treatment system by identifying adaptive flow control strategies that balance treatment performance and energy use. A laboratory-scale experimental setup was developed, and a dataset of 300 operating samples was collected under varying conditions of water hardness, total dissolved solids (TDS), and valve opening degree. The specific final energy use (kWh/m³), estimated from hydraulic operating parameters, was used as a key performance indicator. The dataset was divided into training (80%) and testing (20%) subsets, and model performance was evaluated using R², RMSE, and MAE metrics. Extreme Gradient Boosting (XGBoost) was employed as the primary predictive model due to its robustness in handling nonlinear relationships and small-to-medium datasets, while Random Forest (RF) was used as a baseline for comparison. Hyperparameters of both models were tuned using cross-validation to improve generalization performance. The results demonstrate that ML models can accurately approximate the nonlinear relationship between water quality parameters, control actions, and energy use. XGBoost achieved higher predictive accuracy and stability compared to RF.
Model-based analysis identified operating regions where treatment requirements are satisfied with reduced final energy use. Under representative operating scenarios, the proposed ML-assisted control strategy indicates a potential reduction of specific final energy use by approximately 10–15% without compromising treatment performance. These findings confirm the feasibility of integrating machine learning as a decision-support tool for energy-aware control in ion-exchange wastewater treatment systems and provide a basis for future implementation of real-time intelligent control frameworks.
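The evaluation metrics named above (R², RMSE, MAE) have standard definitions; a minimal sketch, not the study's code:

```python
import math

def rmse(y, y_hat):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def mae(y, y_hat):
    """Mean Absolute Error."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def r2(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot
```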

Design, Fabrication, and Experimental Validation of a Truncated Curved-Blade Rotor Based on Fibonacci Spiral Geometry for Vertical-Axis Wind Turbines

This study presents the design, fabrication, and experimental validation of a vertical-axis wind turbine (VAWT) rotor with truncated curved blades, whose geometry is rigorously based on the Fibonacci spiral and the aspect ratio defined by the golden ratio. The proposed configuration aims to enhance aerodynamic performance and energy conversion efficiency for small-scale wind energy applications, particularly in low to moderate wind speed regimes. The rotor was conceived as an alternative solution for decentralized generation systems and hybrid renewable energy configurations.

The aerodynamic behavior and operational performance of the proposed rotor were evaluated through a comprehensive experimental campaign that included controlled wind tunnel testing at laboratory scale and validation under real operating conditions. To systematically analyze the influence of design and operational parameters, a full factorial experimental design (N = 3^K) was implemented, considering two primary input factors (K = 2): the number of blades (NA = 2, 4, and 6) and the wind speed (Vv) at three levels, namely 6.4 m/s (low), 7.6 m/s (medium), and 8.8 m/s (high). The experimental setup was conducted in a wind tunnel with a controllable velocity range from 2.5 to 10 m/s, allowing precise regulation of the flow conditions.

The response variables analyzed included the available wind power (Pe), rotor mechanical power (Pr), tip-speed ratio (TSR), power coefficient (Cp), and rotational speed (rpm). Based on the experimental data, empirical mathematical models were developed to quantify the effect of the selected factors on power generation, aerodynamic efficiency, rotational dynamics, and TSR behavior. These models enabled a detailed assessment of performance trends and interactions between blade number and wind speed.
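The relationships among these response variables are standard rotor aerodynamics. A minimal sketch follows; the air density, swept area, and rotor radius used in the example are hypothetical, since the paper's rotor dimensions are not given here:

```python
import math

RHO = 1.225  # air density in kg/m^3 (sea-level standard; an assumption here)

def available_power(area_m2, v_ms):
    """Available wind power Pe = 0.5 * rho * A * v^3, in watts."""
    return 0.5 * RHO * area_m2 * v_ms ** 3

def tip_speed_ratio(rpm, radius_m, v_ms):
    """TSR = omega * R / v, with omega converted from rpm to rad/s."""
    return (2.0 * math.pi * rpm / 60.0) * radius_m / v_ms

def power_coefficient(p_rotor_w, p_available_w):
    """Cp = Pr / Pe, the fraction of available power captured by the rotor."""
    return p_rotor_w / p_available_w
```

For instance, with a hypothetical swept area of 3.6 m² at the high wind-speed level (8.8 m/s), Pe is roughly 1.5 kW, consistent in magnitude with the reported range.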

The experimental results reveal power coefficient values ranging from 0.55 to 0.57, indicating a high level of aerodynamic efficiency for a vertical-axis configuration. The available wind power varied between 0.588 and 1.586 kW, while the rotor mechanical power ranged from 0.344 to 0.887 kW. The TSR values were found between 0.64 and 0.85, and the rotational speeds varied from 155.6 to 288.62 rpm, depending on the operating conditions and blade configuration.

Overall, the obtained results demonstrate that the proposed Fibonacci-based rotor exhibits robust aerodynamic performance and stable operational behavior, positioning it as a viable and efficient alternative for mini wind energy systems and hybrid mini wind–solar generation schemes. The integration of Fibonacci spiral geometry and golden ratio proportions provides a promising design pathway for improving the performance of vertical-axis wind turbines in distributed renewable energy applications.

Thermodynamic-based perspectives on critical raw materials in photovoltaic energy production: the Italian case

Introduction

Global decarbonization efforts have accelerated the deployment of renewable infrastructures, particularly photovoltaic (PV) systems, due to their scalability and cost-efficiency. However, this transition increases demand for Critical Raw Materials (CRMs), which often involve energy-intensive extraction and refining processes with high greenhouse gas emissions [1]. While traditional assessments focus on material demand, supply risks, and economic security, they frequently overlook the physical quality of resources and the thermodynamic irreversibility of their transformation. Specifically, high-purity refining entails significant exergy losses, indicating material degradation. This study explores existing methodological frameworks to discuss how a thermodynamic perspective could offer a physics-based, complementary metric to socio-economic and environmental analyses, better characterising the long-term sustainability of material-intensive technologies.

Methods

This study integrates a quantitative material demand assessment for PV technologies with an exploratory analysis of thermodynamic and thermoeconomic methods addressing resource quality. Using the Italian National Energy and Climate Plan (PNIEC) as a 2030 reference scenario, we estimate cumulative material requirements for PV systems, specifically silicon, silver, copper, aluminium, steel, and concrete, and associate their potential environmental impacts using Carbon Footprint (CF) ranges from the JRC report [2]. In this context, it is useful to complement these quantities with an overview of exergy-based and thermoeconomic approaches and metrics [3, 4] that explicitly address resource quality. The analysis focuses on the conceptual relevance of these physical indicators and their capacity to inform sustainability assessments for material-intensive technologies.

Results and Discussion

PNIEC-based assessments confirm that large-scale PV deployment requires substantial quantities of both bulk and highly refined materials. Projections of future energy demand can be used to calculate the future CO2eq emissions linked to PV technologies (i.e., their CF). While this metric clarifies climate impacts, it offers a primarily environmental perspective. Integrating thermodynamic frameworks can enhance resource characterization by addressing physical aspects that emission-based indicators overlook, such as resource quality and degradation. Frameworks such as cumulative exergy demand, exergy replacement costs, exergy footprint, and thermodynamic rarity explicitly account for irreversibility and resource quality degradation. For example, the thermodynamic rarity (exTR) measures the "amount of exergy resources needed to obtain a mineral commodity from an accessible common rock, using the best prevailing technology" with respect to a reference state in which the minerals are fully dispersed in the crust. This quantity comprises a physical cost, meaning the exergy resources needed to convert a mineral into a commodity (embodied exergy, or exergy cost), and a non-visible cost, which accounts for the "natural" cost (exergy replacement cost) of the actual concentration of ores in mines rather than their dispersal in the crust. It thus captures both crustal scarcity and ore degradation, while also reflecting achievable technology improvements. For the case study, the PNIEC PV-related cumulative 2020–2030 demand has been evaluated as 147–240 kt of PV-grade silicon, 450 kt of aluminium, 276 kt of copper, and 600 t of silver. CF values of PV systems range from 10.8 to 44.0 gCO2eq/kWh, driven by the electricity mixes used in silicon manufacturing, material intensity, and structural components. Technology lifetime further influences emission intensity per unit of energy.
Meanwhile, the thermodynamic rarity values, exTR [GJ/kg], of the main elements used in PV technologies vary widely: silicon (77.0), aluminium (681.7), copper (348.7), and silver (8937.6). This yields a complementary thermodynamic quantity, grounded in the second law and in the degradation of resources, that can be further used to assess resource sustainability. Consequently, these approaches can offer a thermodynamic foundation for assessing the long-term sustainability of material-intensive technologies, although such methodologies are rarely applied in CRM-oriented policy analyses or systemic evaluations of green technologies.
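Combining the reported exTR values with the reported PNIEC demand gives order-of-magnitude rarity footprints per material. This is a simple illustrative multiplication, not a result stated in the abstract (silicon is taken at the low end of its 147–240 kt range):

```python
# Thermodynamic rarity values reported in the abstract, in GJ per kg
EX_TR = {"silicon": 77.0, "aluminium": 681.7, "copper": 348.7, "silver": 8937.6}

# Reported PNIEC cumulative 2020-2030 PV demand, converted to kg
# (silicon taken at the low end of its 147-240 kt range)
DEMAND_KG = {"silicon": 147e6, "aluminium": 450e6, "copper": 276e6, "silver": 600e3}

def rarity_footprint_gj(material):
    """Embodied thermodynamic rarity of the cumulative demand, in GJ."""
    return EX_TR[material] * DEMAND_KG[material]
```

Even at only 600 t of demand, silver's very high exTR puts its footprint within an order of magnitude of silicon's, while the bulk metals aluminium and copper dominate the total.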

Conclusions

While material demand and CF indicators are essential, they overlook the physical implications of resource transformation. Thermodynamic approaches complement these metrics by addressing resource quality and irreversibility, fostering a systemic sustainability evaluation of material-intensive technologies.

References

[1] International Energy Agency. Global Critical Minerals Outlook 2024. Paris: IEA; 2024.

[2] European Commission, JRC. Harmonised rules for the calculation of the CF of PV modules in the context of the EU Ecodesign Directive. Luxembourg: Publications Office of the European Union; 2025.

[3] Szargut J. Exergy Method: Technical and Ecological Applications. Southampton: WIT Press; 2005.

[4] Sciubba, E. A Thermodynamic Measure of Sustainability. Frontiers in Sustainability 2021, 2, 739395. DOI: 10.3389/frsus.2021.739395

[5] Iglesias-Émbil et al. Raw material use in a battery electric car - a thermodynamic rarity assessment. Resources, Conservation & Recycling 2020, 158, 104820. DOI: 10.1016/j.resconrec.2020.104820

Distributed Generation in Mexico’s Energy Transition: Regulatory Change, Institutional Tensions and Policy Implications

Introduction

Mexico's electricity sector has undergone significant regulatory shifts in recent months. Amendments to the Electricity Sector Law, the publication of its implementing regulations, the update of the Ministry of Energy's Sectoral Plan (SENER), and the release of the country's third Nationally Determined Contributions (NDC 3.0) have together reshaped the incentives, boundaries, and constraints governing the integration of clean technologies into the national grid.

Within this evolving landscape, distributed generation (DG) has gained strategic relevance for Mexico's energy transition. By enabling the decentralization of electricity production, supporting the incorporation of renewable sources, and broadening community participation, DG holds the potential to strengthen energy democracy (Moroni, 2024). It also offers a pathway to simultaneously advance decarbonization, energy security, and capacity expansion while reducing transmission losses and diversifying the energy mix.

This paper aims to assess the internal coherence of the new regulatory framework and to identify the structural factors that either enable or constrain the expansion of DG. In doing so, it seeks to provide analytical inputs that can inform the design of public policies more consistent with the country's energy transition objectives.

Methods

The study draws on a qualitative analysis of Mexico's updated energy regulatory framework, a review of specialized technical reports, and semi-structured interviews with key stakeholders across the national energy ecosystem. Findings are organized through the Multi-Level Perspective proposed by Geels (2024), which distinguishes between macro, meso, and micro dynamics of sociotechnical transitions.

The research systematically examines recent amendments to the Electricity Sector Law, its corresponding regulations, the Energy Sectoral Plan, and the NDC 3.0, with particular attention paid to provisions bearing on distributed generation, renewable energy integration, and grid modernization. A public policy analysis lens is applied to identify areas of regulatory coherence, existing gaps, and implementation mechanisms. The study is further grounded in recent data on installed DG capacity and market trends, alongside specialized literature on energy transitions and regulatory governance.

Results

The analysis reveals tensions across all three levels of the Multi-Level Perspective. At the meso level, a central finding is the limited articulation between the new electricity regulatory framework and the NDC 3.0. Persistent gaps are evident in institutional coordination, access to financing, and technical support, alongside an uneven distribution of implementation capacities across the country. These tensions are illustrated by the marked geographic and technological concentration of DG activity: 37% of interconnection requests are concentrated in just two states—Jalisco and Nuevo León—and 96% correspond to photovoltaic systems.

At the macro level, national energy policy has prioritized energy sovereignty through the consolidation of state-owned enterprises, particularly Petróleos Mexicanos (PEMEX) and the Comisión Federal de Electricidad (CFE). The recently launched Plan México further reinforces this orientation by designating petrochemicals—a gas-intensive industry reliant on hydraulic fracturing—as a strategic productive sector.

At the micro level, existing instruments tend to focus on residential and community-scale DG, primarily within rural electrification programs such as those operating in Baja California Sur. This suggests a policy logic oriented toward expanding energy access rather than driving systemic scaling. The situation is further complicated by inadequate household infrastructure in low-income regions, particularly across the northeast, south, and southeast of the country, which limits the viability of DG projects in areas where they may be most needed.

Conclusions

Distributed generation occupies a relevant, if still constrained, position within Mexico's emerging energy framework, with genuine potential to contribute to matrix diversification and system resilience. Yet the findings point to a fragmented development trajectory, one shaped more by a logic of containment than by structural promotion. Advancing DG's role in the energy transition will require stronger alignment between climate commitments and energy policy, improved institutional coordination, and the design of mechanisms capable of scaling DG in an inclusive, equitable, and territorially balanced manner.

Global Energy Security at Risk: Maritime Piracy and Supply Chain Disruptions in the Context of Middle East Conflict (2026)

The contemporary global energy system relies heavily on maritime transportation for the delivery of critical energy resources, including crude oil, liquefied natural gas (LNG), and refined petroleum products. Approximately 80–90% of global trade is transported by sea, while around 20% of the world’s oil supply transits through key maritime chokepoints such as the Strait of Hormuz. As a result, the security and reliability of major maritime routes play a fundamental role in ensuring the continuity of energy supply and the stability of energy markets worldwide. While technological, environmental, and geopolitical risks affecting energy systems are widely discussed, maritime piracy remains an underexamined yet persistent non-technical threat to global energy security.

The primary objective of this study is to assess the impact of maritime piracy on energy security through disruptions in global maritime energy supply chains. The analysis adopts a global perspective while using the Middle East as a case study, particularly in the context of heightened geopolitical tensions in 2026 affecting strategic routes such as the Strait of Hormuz and Bab el-Mandeb. The study is based on piracy incident data (e.g., International Maritime Bureau reports), AIS-derived maritime traffic data, and spatial analysis of key energy shipping routes.

The study highlights that the consequences of maritime piracy extend well beyond direct material losses or immediate threats to vessel crews. Piracy incidents frequently result in delays in energy deliveries, forced rerouting of vessels, increased fuel consumption, and higher greenhouse gas emissions due to longer shipping distances. Rerouting tankers around the Cape of Good Hope instead of transiting high-risk areas can significantly increase voyage distance and fuel costs. Moreover, heightened piracy risk leads to increased insurance premiums, higher freight rates, and additional security expenditures, which can raise shipping costs by anywhere from a few percent to well over ten percent in high-risk regions.

These costs are ultimately transferred to energy markets, influencing energy prices and increasing volatility, particularly in import-dependent regions. In the context of the 2026 Middle East tensions, additional disruptions and perceived risks in critical maritime corridors further amplify these effects, contributing to concerns over fuel availability and market stability.

In addition, the paper discusses the implications of maritime piracy for energy policy and long-term investment decisions. Persistent security risks in maritime transport corridors may discourage investment in energy infrastructure, alter trade patterns, and undermine efforts to build resilient and sustainable energy systems. The role of modern technological solutions, including satellite surveillance, Automatic Identification System (AIS) data analysis, and artificial intelligence-based risk assessment tools, is examined as a means of mitigating piracy-related risks and enhancing the resilience of maritime energy supply chains.

The findings of this study indicate that maritime piracy remains a significant and often underestimated threat to global energy security. The paper contributes to the literature by integrating maritime security risks into energy economics and energy security analyses, which have traditionally focused on technical and geopolitical factors. Addressing piracy through coordinated international security measures, technological innovation, and integrated energy and maritime policies may improve the stability and resilience of global energy supply systems.

Mapping Energy Governance in the Global Energy Transition: An Evidence-Based Topic Modeling Approach

The global energy transition is reshaping energy systems through profound technological, economic, and institutional transformations, intensifying the need for coherent governance frameworks capable of aligning energy sources, market mechanisms, and long-term policy objectives. While energy governance has gained increasing scholarly attention, the literature remains conceptually fragmented, making it difficult to extract systematic, evidence-based insights relevant to energy economics and policy. This study provides a structured and quantitative mapping of global research on energy governance within the energy transition context.

The analysis integrates bibliometric techniques with Latent Dirichlet Allocation (LDA) topic modeling applied to a curated corpus of 312 peer-reviewed journal articles indexed in Scopus and Web of Science between 2014 and 2025. Although focused in scope, the dataset was deliberately constructed using strict inclusion criteria to capture publications explicitly addressing governance dimensions within energy transition debates, thereby ensuring thematic coherence rather than broad bibliometric coverage. Bibliometric indicators reveal an average annual growth rate exceeding 20% in governance-related transition research over the past decade, with a clear post-2018 acceleration. LDA modeling (optimal solution: 10 topics, coherence score = 0.51) identifies dominant thematic clusters centered on energy policy frameworks (22% topic prevalence), renewable energy governance and deployment mechanisms (18%), market regulation and investment dynamics (15%), and multi-level institutional coordination (13%). Emerging but rapidly growing themes include energy justice and participation (9%) and distributional impacts of transition policies (7%).
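The topic-prevalence percentages quoted above are typically computed as the mean topic weight across the corpus. The following is a minimal illustrative definition of that measure, an assumption about the computation rather than the authors' code:

```python
def topic_prevalence(doc_topic):
    """Corpus-level prevalence of each topic: the mean topic weight across
    documents, where each row is a per-document topic distribution
    (e.g. from a fitted LDA model) summing to 1."""
    n_docs, n_topics = len(doc_topic), len(doc_topic[0])
    return [sum(doc[k] for doc in doc_topic) / n_docs for k in range(n_topics)]
```

Because each row sums to 1, the prevalence values also sum to 1, so they can be read directly as shares of the corpus, like the 22% and 18% figures above.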

Temporal trend analysis shows a statistically significant increase (p < 0.05) in governance- and policy-oriented topics after 2019, alongside a relative decline in purely technology-centric discussions. Citation network metrics indicate increasing cross-referencing between governance, market design, and renewable energy deployment studies, suggesting the progressive integration of technical and economic policy dimensions. The findings also highlight persistent tensions between fossil fuel governance regimes and renewable energy policy pathways, particularly regarding regulatory stability, investment risk allocation, and market signaling mechanisms.

While bibliometric and LDA approaches are widely used in energy research, the contribution of this study lies in its focused and quantitative examination of governance as the mediating layer between energy sources and economic policy instruments in the transition process. By providing explicit topic prevalence measures, temporal dynamics, and network-based evidence, this study strengthens the empirical basis for governance-centered energy policy analysis.

The results underscore the growing centrality of governance in shaping energy economics outcomes and support the development of integrated, evidence-informed policy frameworks that enhance coherence between regulatory instruments, market structures, and decarbonization objectives. This work contributes to advancing systematic, data-driven approaches to understanding governance dynamics in global energy transitions and provides a replicable analytical framework for future research in energy economics and policy.

This research was made possible thanks to the financial support of the Agencia Nacional de Hidrocarburos (ANH), through its Vicepresidencia Técnica, within the framework of Contract No. 618 of 2025 executed between the ANH and the Universidad del Magdalena. The authors gratefully acknowledge this institutional support, which enabled the development of the analyses presented in this study and contributed to strengthening evidence-based research on energy governance and the energy transition.


Analysis of the Free Energy Market Opening for Low-Voltage Consumers in Brazil: A Critical Approach and International Comparative Study

Introduction: The Brazilian electricity sector has undergone a gradual liberalization process since 1995, successfully granting freedom of choice to high-voltage consumers. However, the vast majority of the consumer base (residential users and small businesses) remains captive to local distributors and is prevented from negotiating directly with suppliers. Despite recent regulatory moves, such as the proposal for full market opening by 2026, there is a significant gap in research on the socioeconomic risks of this transition. Specifically, there is little evidence on how to balance increased competition with the need for social equity and the financial sustainability of distributors. Therefore, this study aims to assess the prospect of full market opening in Brazil, drawing lessons from international benchmarks and modeling the potential economic impacts on the national regulated market.

Methods: The research methodology begins with a systematic literature review in global databases to identify success and failure factors in the international liberalization of low-voltage electricity. This review provides the basis for a critical contextualization of the Brazilian scenario, comparing the historical constraints of the captive market with the proposed liberalization goals. For the quantitative phase, the study replicates established forecasting protocols, applying models to Brazilian historical data (2004–2024) to project demand and costs until 2035.
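The quantitative phase projects demand and costs to 2035 from 2004–2024 data. The abstract does not specify the forecasting models, so as a minimal stdlib-only sketch of such a projection, the snippet below fits an ordinary least-squares linear trend to a synthetic demand series and extrapolates it; the series, units, and growth rate are illustrative stand-ins, not the study's data or protocol.

```python
def ols_trend(xs, ys):
    """Fit y = a + b*x by ordinary least squares; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# deterministic toy demand series (TWh), 2004-2024: 330 TWh growing 8 TWh/year
years = list(range(2004, 2025))
demand = [330 + 8 * (y - 2004) for y in years]
a, b = ols_trend(years, demand)
forecast_2035 = a + b * 2035  # extrapolate the fitted trend to 2035
print(round(forecast_2035, 1))  # → 578.0
```

In practice such projections would use richer models (seasonality, scenario-dependent tariffs), but the extrapolation step is structurally the same.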

Results: Preliminary analysis of international experiences reveals several structural challenges: Germany exhibits significant market rigidity due to high switching costs and consumer inertia; France maintains price stability through state redistribution of nuclear energy profits; Japan recently experienced a "reverse phenomenon," in which market prices exceeded regulated prices after the 2022 energy crisis, leading to a contraction in the free market share; and the United Kingdom demonstrates that high consumer satisfaction (81%) can coexist with record levels of household energy debt (£3.85 billion). In the Brazilian context, initial simulations indicate that regulated tariffs may increase between 2% and 11% during the migration phase. This pressure is disproportionately high for smaller distributors, where loss of scale threatens solvency and may force the remaining captive consumers, including 67% of low-income users, to absorb the fixed costs of the distribution network.

Conclusions: The study concludes that while liberalization can boost efficiency and innovation, its successful implementation in Brazil depends on robust regulatory safeguards. To avoid deepening social inequalities, the transition should include gradual management of legacy contracts, investment in smart metering infrastructure, and comprehensive energy education programs to protect vulnerable users. Furthermore, the research suggests that an abrupt opening could be detrimental to the financial health of regional utilities. Instead, a phased approach is recommended, ensuring that gains in competitiveness are not achieved at the expense of tariff affordability for those who remain in the regulated market.

Techno-Economic Comparison of Carbon Capture Technologies with Exhaust Gas Recirculation in NGCC Power Plants

Natural gas combined cycle (NGCC) power plants are expected to remain a critical component of global electricity generation due to their high efficiency, operational flexibility, and comparatively lower specific carbon dioxide (CO₂) emissions than coal-fired power plants. However, achieving long-term climate targets requires deep decarbonization of gas-fired generation through the integration of carbon capture, storage, and utilization (CCSU) technologies. Post-combustion amine absorption is currently the most mature capture technology, while membrane separation represents a promising emerging alternative with potential advantages in modularity and operational simplicity. In parallel, process intensification strategies such as exhaust gas recirculation (EGR) have recently attracted growing attention due to their ability to increase CO₂ concentration in flue gas and reduce the energy penalty associated with capture processes. Despite significant progress, comprehensive techno-economic comparisons of absorption, membrane, and hybrid capture systems under EGR-integrated NGCC configurations remain limited.

This study presents an integrated techno-economic assessment of multiple CO₂ capture configurations applied to a 450 MW NGCC power plant. Detailed process simulations were developed using Aspen Plus and Aspen Custom Modeler to evaluate absorption-based, membrane-based, and hybrid capture systems combined with selective and non-selective EGR strategies. The modelling framework includes full process integration with the steam cycle, enabling consistent evaluation of energy consumption, efficiency penalties, and key economic indicators. The assessment focuses on net plant efficiency, energy penalty, levelized cost of electricity (LCOE), and CO₂ avoidance cost, providing a consistent basis for comparing capture options.
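The two headline economic indicators, LCOE and CO₂ avoidance cost, follow standard textbook definitions. As a hedged sketch of how they are computed, the snippet below implements both; the discount rate, plant life, reference LCOE, and emission intensities are illustrative assumptions, not the study's inputs.

```python
def crf(rate, years):
    """Capital recovery factor for annualizing an upfront capital cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, fixed_om, vom_per_mwh, fuel_per_mwh, mwh_per_year, rate=0.08, life=25):
    """Levelized cost of electricity, USD/MWh: annualized fixed costs plus variable costs."""
    return (crf(rate, life) * capex + fixed_om) / mwh_per_year + vom_per_mwh + fuel_per_mwh

def co2_avoidance_cost(lcoe_cc, lcoe_ref, e_ref, e_cc):
    """USD per tonne CO2 avoided: extra generation cost / emissions reduction (tCO2/MWh)."""
    return (lcoe_cc - lcoe_ref) / (e_ref - e_cc)

# illustrative only: a capture-plant LCOE of 72 USD/MWh against an assumed
# 60 USD/MWh unabated reference, with assumed intensities of 0.35 vs 0.04 tCO2/MWh
print(round(co2_avoidance_cost(72, 60, 0.35, 0.04), 1))  # → 38.7
```

Note that the avoidance cost divides by the emissions *avoided* (reference minus capture intensity, per MWh), not the emissions *captured*, which is why it always exceeds the cost per tonne captured.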

Results indicate that integrating EGR substantially enhances the performance of carbon capture in NGCC systems. In particular, selective EGR combined with amine absorption demonstrates the most favourable performance, reducing the energy penalty by more than 30% compared with standalone absorption and by over 70% relative to membrane separation. The selective EGR–absorption configuration achieves an estimated LCOE of approximately 72 USD/MWh and a CO₂ avoidance cost of about 39 USD/tCO₂, outperforming the selective EGR–membrane configuration, which yields approximately 77 USD/MWh and 51 USD/tCO₂. These results highlight the strong influence of flue gas composition and process integration on the overall economic viability of carbon capture.

The findings demonstrate that combining process intensification with established capture technologies can significantly improve the feasibility of CCSU deployment in NGCC power plants. This work provides insights into cost-effective decarbonization pathways for gas-fired power generation and supports ongoing efforts toward large-scale implementation of carbon capture technologies in the transition to low-carbon energy systems.
