
List of accepted submissions

 
 
 
Green valorisation of waste vegetable oil into thermo-responsive nanocomposites for enhanced oil recovery

Introduction:

Although polymer flooding is an efficient chemical enhanced oil recovery (CEOR) method, the poor thermal and salinity stability of commonly used partially hydrolysed polyacrylamide (HPAM) hinders its implementation in reservoirs with extremely high-temperature and high-salinity (HTHS) conditions. Hence, the synthesis of environmentally sustainable polymers able to maintain stability under harsh reservoir conditions is of significant research interest for enhanced oil recovery (EOR) applications.

Methods:

In this research, oleic acid-enriched waste vegetable oil (WVO) was valorised to synthesise a green, thermo-responsive nanocomposite composed of very-long-chain fatty acid esters for EOR applications under harsh HTHS reservoir conditions. A sustainable transesterification approach was first applied to produce a fatty acid-based thermo-responsive oleic acrylate macromonomer from WVO. The obtained green macromonomer was subsequently copolymerised with acrylamide, 2-acrylamido-2-methylpropane sulfonic acid, and dimethylphenylvinylsilane via emulsion polymerisation. The structure, morphology and compositional properties of the synthesised nanocomposite were extensively characterised by FTIR, 1H NMR, TEM, SEM and EDX. Furthermore, the thermal behaviour of the synthesised nanocomposite was assessed using TGA and DTA techniques.

Results:

The results demonstrated clear temperature-responsive thickening behaviour at a minimal polymer content of 0.04 wt.%, with viscosity increasing as the temperature rose from 25 to 110 °C and as salinity increased from 1,000 to 250,000 ppm, as well as in deionised water. This thermo-responsive thickening indicates enhanced mobility control and improved sweep efficiency during polymer flooding. Flooding experiments showed that the acrylated oleate-g-terpolymer/silica nanocomposite is a promising polymer flooding candidate, achieving recovery factors of 15.4% and 21.2% at concentrations of 0.04 wt.% and 0.05 wt.%, respectively, under harsh conditions of 100 °C and a salinity of 250,000 ppm. Moreover, the synthesised nanocomposite exhibited significant resistance factor (Rf) values of 5.9 and 8.1 at concentrations of 0.04 wt.% and 0.05 wt.%, respectively. The nanocomposite also altered sandstone wettability from oil-wet to water-wet, further improving oil recovery.

Conclusions:

These results suggest that waste vegetable oil-derived thermo-responsive nanocomposites offer a sustainable, high-performance polymer flooding solution capable of delivering significant oil recovery under ultra-high-temperature and high-salinity reservoir conditions at remarkably low polymer concentrations. The findings highlight the strong potential of these green, thermo-responsive nanocomposites as promising candidates for large-scale EOR applications in HTHS oil reservoirs worldwide.

Applications of RNN, LSTM, and GRU in Solar Irradiance Prediction for Photovoltaic Systems

Introduction

The rigorous analysis of solar radiation data is fundamental to the advancement of renewable technologies and climate studies, particularly in Natal, Brazil, a strategic location characterized by an annual insolation of approximately 2,968.4 hours and an average daily irradiance of 5.0 kWh/m². This research addresses a critical gap in the literature, which is often limited by a scarcity of validated models for the high volatility of tropical regions and by an overreliance on univariate approaches that neglect external environmental variables, by investigating the application of Deep Learning architectures such as Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). By modeling complex temporal dependencies and dynamic patterns in multivariate series, this study seeks to validate the efficacy of these configurations in optimizing energy resource forecasting and enhancing grid reliability.

Methodology

The methodological framework commenced with comprehensive data pre-processing, employing spline interpolation techniques to handle missing values and z-score standardization for data normalization. To capture the temporal dynamics of the series, the data were segmented into sliding windows of 10 samples each, serving as input for the neural networks. The dataset was partitioned into training (90%) and testing (10%) sets, ensuring the preservation of sequential integrity for robust validation. The implementation was executed in Python using the TensorFlow and Keras frameworks. The evaluated architecture consisted of a four-layer deep structure (256, 128, 64, and 1 neuron in the output layer), utilizing ReLU activation for hidden layers and linear activation for the output. Hyperparameter optimization was conducted by systematically varying batch sizes (16, 32, and 64) and training durations (50 and 100 epochs) to identify the most efficient model configurations.
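The windowing and split described above can be sketched in a few lines of NumPy (a minimal sketch: the 10-sample windows and chronological 90/10 split follow the text, while the synthetic array merely stands in for the real multivariate series; the Keras model itself is omitted):

```python
import numpy as np

def make_windows(series, window=10):
    """Segment a (time, features) array into sliding input windows of
    `window` samples and next-step targets (here: the first column)."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window, 0])
    return np.array(X), np.array(y)

# z-score standardization, then a chronological 90/10 split
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))          # stand-in for the multivariate series
data = (data - data.mean(axis=0)) / data.std(axis=0)

X, y = make_windows(data, window=10)
split = int(0.9 * len(X))                  # preserve sequential integrity
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```

Keeping the split chronological (rather than shuffled) is what preserves the sequential integrity the methodology calls for.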

Results

Although all architectures converged toward a coefficient of determination (R²) of 0.94, the practical distinction between the models lies in the trade-off between statistical precision and structural complexity. The RNN 3 model achieved the lowest absolute errors, establishing itself as the most accurate configuration for this specific dataset. In contrast, the LSTM and GRU architectures demonstrated greater robustness in handling long-term temporal dependencies, effectively mitigating the vanishing gradient problem. Notably, GRU 3 matched the performance of LSTM 3 with a simpler and more efficient computational structure. In terms of practical application, the choice among these Deep Learning architectures depends on the specific energy-system requirements: RNNs are well suited to rapid processing within short time windows; LSTMs are essential for systems requiring high reliability in long-term and seasonal forecasting, as their 'memory gates' prevent the loss of historical information; and GRUs offer the most balanced solution for microcontrollers and low-cost embedded systems, providing the robustness of LSTMs with significantly lower memory and processing demands.

The error metrics for the top-performing configurations are summarized as follows:

  • RNN 3: RMSE of 291.92; MAE of 208.47; R² of 0.94.
  • GRU 3: RMSE of 295.69; MAE of 211.90; R² of 0.94.
  • LSTM 3: RMSE of 296.14; MAE of 213.44; R² of 0.94.
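For reference, the three reported metrics can be computed directly from predictions (plain NumPy; the toy arrays below are illustrative, not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and R², the three metrics reported for each configuration."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
rmse, mae, r2 = regression_metrics(y_true, y_pred)
```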

Conclusions

This study concludes that the LSTM model demonstrated the greatest robustness in solar radiation forecasting, exhibiting a high correlation with empirical data and superior stability in predicting peak events and seasonal patterns. The primary contribution of this work lies in the use of an unprecedented regional dataset, allowing for a rigorous performance evaluation of these architectures under the specific geoclimatic conditions of tropical regions. Notably, the LSTM architecture outperformed the conventional RNN by maintaining lower sensitivity to stochastic noise and modeling complex nonlinear relationships with higher precision, even in the presence of high local thermal and irradiance volatility. These findings validate the implementation of advanced recurrent architectures as a robust and contextualized tool for the sustainable management of solar energy systems in tropical environments.

Integrating Electro-Thermal Dynamics into SOC Estimation through Internal Resistance and Thermal Loss Regularization

The accurate estimation of the State of Charge in lithium-ion batteries represents a fundamental task for modern energy management systems, yet it remains a complex challenge due to latent electro-thermal dynamics that cannot be directly observed through simple terminal measurements. In this study, we present a robust and physically consistent framework that utilizes internal resistance as an intermediate variable to stabilize the prediction of the State of Charge.
The work begins with an in-depth predictive power analysis conducted on an experimental dataset. The primary objective was to evaluate the observability of the effective internal resistance based on current, voltage, and temperature statistics. The results of this preliminary analysis revealed a significant disparity in the information provided: while the mean values of current and temperature show limited correlation when considered individually, the statistical features derived from voltage, including variance and quartile distributions, carry the dominant predictive information. Furthermore, we demonstrated that the calculation of the instantaneous ratio between voltage and current variations acts as an essential physical descriptor capable of linking raw data to the electrochemical state of the cell.
Based on these findings, an extremely lightweight resistance estimator was trained, achieving a very low relative error. This component is then coupled with a lumped thermal model. The true innovation of this approach lies in the way the thermal model is utilized: instead of incorporating it as additional input data, which would increase computational complexity during real-time use, it is introduced as a regularization term within the loss function during the training phase. This method imposes electro-thermal consistency by penalizing State of Charge trajectories that deviate from the heat generation profiles predicted by the physics of Joule heating. Experimental results demonstrate that this strategy significantly improves the robustness and stability of the system under various operating conditions, while ensuring zero additional computational load for the on-board hardware during final execution.
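The training-time regularization idea can be sketched as follows (a minimal sketch, assuming a mean-squared data term and a Joule-heating consistency penalty; the weighting coefficient `lam` and all names are illustrative, as the abstract does not give the exact loss):

```python
import numpy as np

def soc_loss(soc_pred, soc_true, current, R_est, q_thermal, lam=0.1):
    """Supervised SOC loss plus an electro-thermal consistency penalty.

    The penalty compares the Joule heating implied by the estimated internal
    resistance (I^2 * R) against the heat generation q_thermal predicted by a
    lumped thermal model. Because it enters only the training loss, inference
    cost on the on-board hardware is unchanged, as described in the abstract.
    """
    data_term = np.mean((soc_pred - soc_true) ** 2)
    q_joule = current ** 2 * R_est                   # heat implied by Joule's law
    thermal_term = np.mean((q_joule - q_thermal) ** 2)
    return data_term + lam * thermal_term
```

Trajectories whose implied heat generation deviates from the thermal model's prediction are penalized, which is what enforces the electro-thermal consistency the abstract describes.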

Data-Driven Prediction of DC Current in an Inverter-Free Photovoltaic Battery System for Telecommunication Antenna Applications

Introduction

Remote telecommunication antennas increasingly rely on hybrid renewable systems to reduce operating costs and emissions while maintaining high service availability. Many of these facilities operate natively with direct current loads, which enables inverter-free architectures and introduces different operational constraints compared with conventional AC microgrids. Despite this practical relevance, the literature still shows limited evidence on data-driven prediction of DC electrical variables in real photovoltaic battery systems supplying telecom infrastructure. This study addresses this gap by proposing a machine learning-based framework to predict DC current demand and generation for a grid-assisted solar-powered telecommunication antenna. The installation is designed to operate without an AC inverter, since the load is entirely DC, which makes current prediction a key variable for energy management and battery utilization.

Methods

The case study considers a real installation located in Sucre, Colombia, equipped with approximately 20 kWp of installed photovoltaic capacity, two battery units rated at 10 kWh each, and a rectifier that enables energy intake from the utility grid when required. High-resolution operational data were collected, including solar generation, battery behavior, DC load current, and relevant meteorological variables. An exploratory data analysis stage was first conducted using statistical distributions and median-based smoothing to identify patterns, outliers, and temporal dependencies. Feature relevance was then assessed through correlation analysis between climatic variables, solar irradiance, and DC electrical measurements, allowing dimensionality reduction to the most influential predictors. Model development was carried out using the EvalML framework for automated machine learning, with additional benchmarking performed using PyCaret to ensure consistency and robustness. Multiple regression-based and tree-based models were trained and validated under identical data partitions to identify the most suitable structure for DC current prediction in this inverter-free configuration.

Results

The results indicate that the proposed framework achieves high predictive accuracy for DC load current. The best-performing model, a CatBoost Regressor, achieved a mean absolute error of 0.4204 A, a mean squared error of 0.3166 A², and a root mean squared error of 0.5615 A on the validation dataset. The coefficient of determination reached 0.9761, indicating strong explanatory power, while the mean absolute percentage error was limited to 1.62%. Additional validation yielded an R² of 0.9999 under controlled conditions, confirming the stability of the learned relationships. These results demonstrate that accurate DC current prediction is feasible without relying on inverter-related variables, and that parsimonious models can achieve reliable performance suitable for edge deployment in telecommunication sites.

Conclusions

This work demonstrates a practical and original data-driven approach for predicting DC current in photovoltaic battery systems supplying telecommunication antennas. By explicitly considering a DC load architecture without an inverter, the study contributes new evidence to an underexplored area of renewable-powered telecom systems. The proposed framework supports improved energy management, battery scheduling, and grid interaction decisions, while remaining computationally efficient. These results are relevant to sustainable transition strategies for critical infrastructure in Colombia and align with emerging applications of artificial intelligence in energy conversion systems, particularly in DC-dominant installations.

An Optimized Hybrid Artificial Intelligence-based Control Strategy for Standalone Photovoltaic Systems under Complex Partial Shading Conditions

Artificial intelligence (AI) techniques play a crucial role in providing smart and adaptive solutions to complex problems. By combining optimization, learning, and predictive capabilities, these methods remain highly relevant for enhancing the operational behavior of engineering systems. The integration of AI techniques into energy-conversion systems has therefore attracted significant attention recently, owing to their high performance, especially in photovoltaic (PV) systems operating in standalone mode. Classical Maximum Power Point Tracking (MPPT) methods often suffer from slow convergence and reduced tracking efficiency under severe operating conditions such as Partial Shading Conditions (PSCs). Moreover, most existing hybrid AI-based MPPT techniques rely on continuous metaheuristic optimization processes that increase computational complexity in standalone PV systems under rapidly varying PSCs. This paper proposes an AI-combined MPPT control technique based on an Improved Perturb and Observe (IP&O) algorithm with a Grey Wolf Optimizer (GWO) for maximizing PV power extraction under these highly complex, non-uniform irradiance conditions, which emulate realistic operation. Unlike traditional P&O, the suggested MPPT employs the GWO algorithm as an adaptive layer that dynamically tunes the IP&O step size, enabling faster convergence while reducing the computational burden, and rapidly regulates the duty cycle of the boost converter. The standalone PV system comprises a PV array, a DC–DC boost converter, and a resistive DC load, regulated by the suggested GWO-IP&O approach and simulated in the MATLAB/Simulink environment (version 2020b). The findings indicate that the proposed control technique achieves a tracking efficiency above 97%, reduces convergence time by approximately 30%, and decreases steady-state oscillations by nearly 50% compared with traditional P&O.
Furthermore, it maintains the overall stability around the operating points, which significantly minimizes the PV power losses. The novelty of this work lies in its integration of the bio-inspired GWO optimization algorithm for adaptively tuning the parameters of the IP&O, enabling an improved performance. The obtained results demonstrate the superior potential of bio-inspired artificial intelligence applications, which provide a powerful solution for improving advanced MPPTs, and can be applied to achieve intelligent energy conversion in standalone PV mode in terms of robustness and reliability when exposed to complex PSCs.
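The decision rule underlying P&O can be sketched as follows (a generic textbook form, not the authors' exact IP&O; in the proposed scheme the `step` argument would be retuned online by the GWO layer rather than held fixed):

```python
def perturb_and_observe(p_now, p_prev, v_now, v_prev, duty, step):
    """One P&O iteration for a boost converter: if power rose with voltage,
    keep moving toward higher PV voltage (lower duty); otherwise reverse."""
    if (p_now - p_prev) * (v_now - v_prev) > 0:
        duty -= step   # power rose with voltage: reduce duty, raise V further
    else:
        duty += step
    return min(max(duty, 0.0), 1.0)  # keep the duty cycle in [0, 1]
```

With a fixed `step`, large values oscillate around the maximum power point and small values converge slowly; adapting the step online is exactly the trade-off the GWO layer is introduced to manage.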

AI-Optimized Pyrolysis: A Machine Learning Framework for Predictive Waste-to-Energy Conversion

Introduction:

Global municipal solid waste (MSW) generation is projected to reach 3.4 billion tons annually by 2050. While pyrolysis offers a sustainable waste-to-energy alternative to landfilling, its industrial application is hindered by the complexity of heterogeneous feedstocks and the limited availability of reliable predictive optimization tools. This research addresses these challenges by developing a scalable, data-driven framework that integrates artificial intelligence with thermochemical principles.

Methods:

Using a dataset of 619 experimentally validated scenarios across 75 feedstock types, we benchmarked XGBoost against Random Forest (RF), Support Vector Regression (SVR), and Multi-Layer Perceptron (MLP) models. Model selection was based on predictive accuracy (R²), robustness across diverse feedstocks, and stability under cross-validation. XGBoost was chosen due to its superior performance in capturing non-linear relationships in high-dimensional chemical data. The dataset was split using an 80/20 train–test scheme, ensuring representative coverage of different feedstock categories in both sets, and model reliability was further assessed using stratified 5-fold cross-validation. Physics-informed constraints were applied post-training to enforce mass balance (total yield ≈ 100%) and thermochemical consistency, ensuring adherence to fundamental physical laws.
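One simple way to realize the post-training mass-balance constraint is to rescale each predicted yield vector so it sums to 100 wt.% (an illustrative implementation; the paper's exact constraint mechanism may differ):

```python
import numpy as np

def enforce_mass_balance(yields):
    """Post-hoc physics constraint: clip negatives, then rescale predicted
    (syngas, bio-oil, biochar) yields so each row sums to 100 wt.%, one way
    to impose the 'total yield ~ 100%' balance described in the text."""
    yields = np.clip(yields, 0.0, None)   # no physically meaningless negative yields
    return 100.0 * yields / yields.sum(axis=1, keepdims=True)

raw = np.array([[45.0, 40.0, 25.0]])      # sums to 110, violating mass balance
balanced = enforce_mass_balance(raw)
```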

Results:

The framework achieved strong predictive performance, with R² values of 0.78 for syngas, 0.76 for bio-oil, and 0.59 for biochar. Reactor type emerged as a significant predictor: fluidized-bed reactors showed higher bio-oil yields due to enhanced heat and mass transfer compared to fixed-bed systems, while auger reactors favored biochar formation under lower heating rates. Temperature remained the dominant operational variable, with an optimal bio-oil production region identified near 526 °C and a heating rate of approximately 100 °C/min.

Conclusions:

This study introduces a novel, physics-informed machine-learning framework that combines algorithm benchmarking, feedstock-diverse validation, and thermodynamic constraints for pyrolysis modeling, addressing key limitations in existing AI-based pyrolysis research. Rather than providing fixed emission reduction values, the model offers a pathway toward improved process efficiency and informed operational optimization. By enabling data-driven reactor and condition selection, this work supports the development of smarter and more sustainable waste-to-energy systems within a circular economy framework.

Benchmarking Deep Learning Techniques for Photovoltaic Output Prediction: A Case Study of PV Systems in China

Accurate forecasting of photovoltaic (PV) power output plays a critical role in the reliable integration of solar energy into modern power grids and in the optimal operation and management of renewable energy systems. Nevertheless, achieving high prediction accuracy remains challenging due to the inherent variability and stochastic nature of meteorological conditions, including solar irradiance, ambient temperature, and atmospheric dynamics. These uncertainties significantly affect PV power generation and necessitate the use of advanced data-driven modeling techniques capable of capturing complex nonlinear and temporal relationships. In this study, a comprehensive benchmarking analysis of four widely used deep learning architectures, Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), is conducted for photovoltaic power forecasting. The evaluated models represent both feedforward and recurrent learning paradigms, allowing for a systematic comparison of their predictive capabilities in modeling PV power dynamics. The models are trained and tested using real-world operational data collected from a grid-connected PV plant in China, comprising meteorological variables and historical PV power output measurements. Prior to model training, appropriate data preprocessing and normalization steps are applied to ensure consistency and robustness. PV power output is predicted at time t using a supervised learning framework, enabling short-term forecasting under real operating conditions. Furthermore, the proposed framework is inherently flexible and can be extended to medium-term forecasting horizons of up to 15 days by integrating Numerical Weather Prediction (NWP) data as external inputs. 
Model performance is quantitatively assessed using widely accepted evaluation metrics, including Root Mean Squared Error (RMSE), Mean Absolute Percentage Error (MAPE), and the coefficient of determination (R²), ensuring a fair and transparent comparison across all architectures. The experimental results clearly indicate that recurrent neural network-based models, particularly the LSTM architecture, consistently outperform feedforward and convolutional models. This superior performance is attributed to their ability to effectively capture temporal dependencies and long-term patterns in PV power time series data. Compared with existing comparative studies, this work provides a unified and rigorous benchmarking framework in which all models are evaluated under identical datasets, forecasting settings, and validation criteria. The findings offer valuable practical insights into the selection and deployment of deep learning models for real-world PV power forecasting applications and support informed decision-making for grid operators and energy planners.

Adaptive State of Energy Estimation for Lithium-Ion Batteries Using an Improved Sage–Husa Extended Kalman Filter

Accurate State of Energy (SOE) estimation is a critical prerequisite for ensuring the safety, reliability, and optimal performance of lithium-ion batteries (LIBs) in modern Battery Management Systems (BMSs). While model-based approaches, such as the standard Extended Kalman Filter (EKF), are widely utilized in industrial applications, their precision is fundamentally compromised by a reliance on static, pre-defined noise covariance matrices (Q and R). These static parameters often fail to account for the highly non-linear, time-variant electrochemical behavior of LIBs under dynamic operating conditions, frequently leading to filter divergence and estimation lag. This research presents an enhanced state estimation framework utilizing an Improved Sage–Husa Extended Kalman Filter (SHEKF) to address these limitations. A second-order (2RC) equivalent circuit model (ECM) is chosen in this research for its optimal balance between computational efficiency and high-fidelity representation of battery dynamics. To ensure a precise model foundation, internal parameters including ohmic resistance, charge transfer, and mass transfer effects, were identified through offline analysis of Hybrid Pulse Power Characterization (HPPC) test data. The non-linear relationship between Open-Circuit Voltage (OCV) and SOE was further refined using a sixth-order polynomial fit to minimize model-induced errors. The proposed SHEKF algorithm incorporates a recursive adaptive mechanism that utilizes filter innovation to estimate and update process and measurement noise statistics in real time. This eliminates the need for manual parameter tuning and allows the estimator to maintain stability across varying current rates and drive cycles. The robustness of the SHEKF was rigorously validated through a comparative analysis against baseline EKF and Strong Tracking EKF (STEKF) algorithms. 
Evaluations were conducted using standardized dynamic datasets, specifically the Federal Urban Driving Schedule (FUDS) and the Urban Dynamometer Driving Schedule (UDDS). Results obtained demonstrate that the SHEKF significantly outperforms traditional estimators, achieving a 75% reduction in Root Mean Square Error (RMSE) for both SOE and terminal voltage estimation compared to the baseline EKF. Specifically, the SHEKF maintained an exceptionally low SOE RMSE of 0.58% and a Maximum Absolute Error (MAE) below 0.84%, even under the volatile current profiles of the UDDS cycle. These findings confirm that the adaptive Sage–Husa mechanism provides a superior, self-correcting solution for high-precision battery state monitoring in real-world electric vehicle applications.
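The core Sage–Husa adaptation can be written, in scalar form, as a fading-memory update of the measurement-noise variance (an illustrative sketch with a hypothetical forgetting factor `b`; the full filter adapts the process-noise statistics analogously and uses matrix quantities):

```python
def sage_husa_R(R_prev, innovation, HPHt, k, b=0.96):
    """Recursive Sage-Husa update of the measurement-noise variance R.

    d_k is a fading weight built from forgetting factor b: early steps weight
    the innovation-based estimate (innovation^2 - H P H^T) heavily, later
    steps blend it gently into the running estimate, removing the need for
    manual tuning of a static R.
    """
    d_k = (1.0 - b) / (1.0 - b ** (k + 1))
    R_new = (1.0 - d_k) * R_prev + d_k * (innovation ** 2 - HPHt)
    return max(R_new, 1e-12)   # keep the variance strictly positive
```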

A Hybrid Artificial Bee Colony with Adaptive Neighborhood Search and Gaussian Perturbation Integrated NSGA-II for Multi-Objective Probabilistic Optimal Power Flow Considering Solar, Wind, Electric Vehicles, and FACTS Devices

The large-scale integration of renewable energy sources (RESs) and electric vehicles (EVs) has significantly increased the operational uncertainty, nonlinearity, and dimensionality of modern power systems, thereby challenging the effectiveness of conventional deterministic optimal power flow (OPF) techniques. To address these challenges, probabilistic optimal power flow (POPF) has emerged as a reliable framework capable of explicitly modeling the stochastic behavior of renewable generation and flexible EV loads while ensuring secure and economical system operation. In this paper, a novel hybrid multi-objective POPF framework is proposed by coupling an Artificial Bee Colony (ABC) algorithm enhanced with Adaptive Neighborhood Search (ANS) and Gaussian Perturbation (GP) strategies with the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The proposed framework simultaneously minimizes total fuel cost and pollutant emissions under AC power flow equality constraints and practical operational limits. Uncertainties associated with wind speed, solar irradiance, and EV charging/discharging behaviors are modeled using appropriate probability density functions, enabling a realistic representation of intermittency and demand-side flexibility. In addition, Flexible AC Transmission System (FACTS) devices, namely the Thyristor Controlled Series Compensator (TCSC) and Thyristor Controlled Phase Shifter (TCPS), are optimally incorporated to enhance voltage regulation, reduce transmission congestion, and improve overall system security. The hybrid ABC–ANS–GP mechanism strengthens global exploration through adaptive neighborhood control, while Gaussian perturbations effectively mitigate premature convergence in high-dimensional search spaces. The embedded NSGA-II ensures robust Pareto dominance ranking and diversity preservation, resulting in well-distributed and convergent trade-off solutions. 
The effectiveness of the proposed approach is evaluated on the IEEE 57-bus test system under three study scenarios: conventional POPF, POPF with RES and EV integration, and POPF with combined RES, EV, and FACTS deployment. Comparative analyses against the recently reported Quasi-Oppositional Artificial Hummingbird Algorithm (QOAHA) demonstrate that the proposed framework achieves an average reduction of 7–10% in total generation cost, a 9–13% emission reduction, and a 15–25% improvement in convergence speed. Furthermore, statistical assessments over multiple independent trials reveal a 35–45% reduction in solution dispersion, confirming superior robustness and consistency under uncertainty. These results validate the proposed framework as a scalable and effective solution for large-scale, uncertainty-aware, multi-objective power system optimization.
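The hybrid search move at the heart of the ABC-ANS-GP mechanism can be sketched as a classic employed-bee step plus a Gaussian kick (a minimal sketch; parameter names, the single-dimension update, and the fixed `sigma` are illustrative, since the adaptive neighborhood control is not specified in the abstract):

```python
import numpy as np

def abc_candidate(x, partner, bounds, sigma, rng):
    """One employed-bee move: standard ABC neighborhood step toward a random
    partner solution, plus a Gaussian perturbation of the kind used here to
    escape premature convergence in high-dimensional search spaces."""
    j = rng.integers(len(x))                  # perturb a single dimension
    phi = rng.uniform(-1.0, 1.0)              # classic ABC step coefficient
    cand = x.copy()
    cand[j] = x[j] + phi * (x[j] - partner[j]) + rng.normal(0.0, sigma)
    lo, hi = bounds
    return np.clip(cand, lo, hi)              # respect operational limits

rng = np.random.default_rng(1)
cand = abc_candidate(np.array([0.5, 0.5]), np.array([0.2, 0.8]), (0.0, 1.0), 0.1, rng)
```

In the full framework, candidates generated this way would be ranked and diversified by NSGA-II's non-dominated sorting rather than by a scalar fitness.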

AI-Driven Neuro-Fuzzy (ANFIS) Control for Small-Scale DC Wind Turbine Systems in Uzbekistan

Small-scale wind turbines (SWTs) are a promising means of delivering stable energy to Uzbekistan's remote settlements, where many rural homes remain weakly connected to the national grid or entirely off-grid. In such systems, conventional proportional–integral (PI) control cannot guarantee tight DC-bus regulation, because wind speed fluctuates, the turbine and generator dynamics are nonlinear, and the DC loads vary continuously. This paper proposes an intelligent, metaheuristic-optimized controller for a small-scale DC wind energy conversion system, with the goal of delivering a stable 48 V, 1 kW DC supply for rural electrification. The system under study comprises a horizontal-axis SWT coupled to a permanent-magnet DC generator and a DC–DC buck converter feeding a 48 V DC bus. The aerodynamic, electrical, and power-electronic subsystems are modeled in MATLAB/Simulink using measured 2024 wind data for the Bukhara region (NASA POWER), together with hub-height adjustment and IEC 61400-1 Kaimal-spectrum turbulence reconstruction to reproduce realistic high-frequency wind-speed variations. Building on an ANFIS-based SWT model previously validated for rural Uzbekistan, this study introduces a Grey Wolf Optimizer-tuned Adaptive Neuro-Fuzzy Inference System (GWO–ANFIS) controller that outperforms a well-tuned PI controller in both accuracy and stability.
The proposed controller uses an ANFIS structure with two inputs, the generator DC voltage and the turbine power, and a single output, the buck converter's duty-cycle command. Generalized bell-shaped membership functions and a Sugeno-type rule base are employed. The Grey Wolf Optimizer performs a global search over the premise and consequent parameters, minimizing a mixed objective function that combines DC-bus voltage tracking error, settling time, and RMS voltage ripple, thereby building power-quality requirements directly into the learning process. The controller's performance is evaluated under step changes in wind speed, IEC-compliant turbulent wind profiles, and stepped DC loads typical of rural household use.
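A mixed objective of the kind described can be sketched as follows (a minimal sketch: the weights `w`, the 2% settling band, and the uniform-sampling assumption are illustrative, since the abstract names the ingredients but not the exact cost):

```python
import numpy as np

def control_objective(v_bus, t, v_ref=48.0, band=0.02, w=(1.0, 0.1, 1.0)):
    """Weighted sum of time-weighted tracking error, settling time, and RMS
    voltage ripple for a simulated DC-bus voltage trace v_bus sampled at t."""
    err = np.abs(v_bus - v_ref)
    dt = t[1] - t[0]                              # assumes uniform sampling
    itae = float(np.sum(t * err) * dt)            # ITAE-style tracking term
    outside = err > band * v_ref
    settle = float(t[outside][-1]) if outside.any() else 0.0  # last excursion
    ripple = float(np.sqrt(np.mean((v_bus - v_bus.mean()) ** 2)))
    return w[0] * itae + w[1] * settle + w[2] * ripple
```

An optimizer such as GWO would evaluate this cost on a closed-loop simulation for each candidate set of ANFIS parameters, which is how power-quality criteria enter the tuning loop directly.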

The simulations show that the GWO–ANFIS controller performs substantially better than the PI controller in all scenarios. It settles faster and with less delay after wind-speed changes and holds the 48 V reference over a wide input voltage range (about 168–530 V). Under strong wind, the optimized controller reduces DC-bus voltage ripple and improves disturbance rejection. During load steps, the DC voltage remains highly stable and the output current follows the power demand without oscillation.

These results indicate that combining neuro-fuzzy control with nature-inspired metaheuristic optimization yields a robust AI-driven control strategy for small-scale DC wind energy systems. The proposed GWO–ANFIS controller improves voltage regulation, responds more quickly, and operates more reliably without adding hardware complexity, making it a strong candidate for a low-cost, stable DC micro-power source in rural Uzbekistan and other regions with low-to-medium wind resources.
