List of accepted submissions

 
 
 
  • Open access
  • 10 Reads
Optimization Techniques for Convolutional Neural Network Architectures Applied to PMSM Motor Diagnostics

Techniques for diagnosing stator winding faults in Permanent Magnet Synchronous Motors (PMSMs) increasingly rely on deep learning models, particularly Convolutional Neural Networks (CNNs), owing to their ability to process diagnostic data directly and detect patterns indicative of faults. However, many CNN architectures proposed in the literature are highly complex, with far more neuron connections than the specific fault detection task requires. This complexity can hinder the practical implementation and real-time operation of such systems.

In this study, we present an optimization approach to reduce the number of connections in a CNN applied to PMSM fault detection and classification. The proposed optimization algorithm utilizes information about the correlation between automatically extracted fault symptoms at different layers of the network. By evaluating the statistical repeatability of features within the network’s architecture, the algorithm selectively eliminates redundant connections and neurons that do not contribute significantly to the fault detection process.
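
As a rough illustration of the correlation-based redundancy idea described above, the following Python sketch flags filters whose activations are statistically redundant over a sample batch. The function name, the batch layout, and the 0.95 threshold are illustrative assumptions, not the authors' published algorithm.

```python
# Illustrative sketch: flag redundant CNN filters by correlating their
# activations over a validation batch. Names and the 0.95 threshold are
# assumptions, not the paper's algorithm.
import numpy as np

def redundant_filters(feature_maps: np.ndarray, threshold: float = 0.95):
    """feature_maps: (batch, channels, H, W) activations from one layer."""
    b, c, h, w = feature_maps.shape
    # One response vector per filter: its activations across the whole batch.
    responses = feature_maps.transpose(1, 0, 2, 3).reshape(c, -1)
    corr = np.corrcoef(responses)              # (c, c) Pearson correlations
    keep, drop = [], []
    for i in range(c):
        # Drop filter i if it is highly correlated with an already kept one.
        if any(abs(corr[i, j]) > threshold for j in keep):
            drop.append(i)
        else:
            keep.append(i)
    return keep, drop

# Example: 32 random feature maps, one duplicated to mimic redundancy.
maps = np.random.rand(8, 32, 14, 14)
maps[:, 1] = maps[:, 0]                        # filter 1 duplicates filter 0
keep, drop = redundant_filters(maps)
print(f"kept {len(keep)} filters, dropped {len(drop)}")
```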

As a result, the optimized CNN requires fewer parameters while maintaining high classification accuracy. This reduction in network complexity also improves response speed, which is crucial for real-time monitoring of PMSMs. The proposed method ensures that the diagnostic system can quickly identify and classify stator faults, enabling more efficient maintenance and reducing the risk of motor failure. The results demonstrate the effectiveness of the optimization technique in both improving performance and enhancing the practical feasibility of CNN-based diagnostic systems for PMSMs.

  • Open access
  • 120 Reads
A Methodological Survey of Autonomous Mobile Robots and Automated Guided Vehicles in Industrial Logistics

Automated Guided Vehicles (AGVs) and Autonomous Mobile Robots (AMRs) are among the key enabling technologies driving intelligent logistics and industrial automation. Despite their widespread adoption and rapid technological evolution, the literature often addresses AGV and AMR systems in a fragmented manner, lacking a structured methodological perspective that highlights their architectural foundations, levels of autonomy, and technological maturity. This paper presents a methodological survey of AGV and AMR technologies, focusing on system-level architectures and core functional components rather than isolated algorithms. The survey systematically analyzes key technological dimensions, including sensing and perception, localization and positioning strategies, navigation and path-planning approaches, communication infrastructures, and multi-robot coordination mechanisms. A clear distinction is drawn between classical AGV systems, which rely on fixed infrastructure and predefined routes, and AMR systems, which exhibit adaptive, perception-driven, and self-configuring behaviors enabled by artificial intelligence techniques. Rather than proposing new algorithms, this paper organizes existing approaches into a coherent framework that highlights technological transitions from infrastructure-dependent guidance to autonomous, data-driven navigation. Recent trends such as cloud–edge integration, learning-based navigation, scalable fleet management architectures, and cooperative multi-robot systems are reviewed and discussed from a methodological standpoint, emphasizing their role in increasing flexibility, robustness, and operational efficiency in industrial and logistics environments. The survey also addresses cross-cutting challenges, including system transparency, safety and certification, interoperability, and sustainability. Finally, the paper outlines research directions aligned with the principles of Industry 5.0, highlighting the need for human-centered, resilient, and scalable AMR and AGV systems capable of safe and explainable operation in complex industrial contexts.

  • Open access
  • 112 Reads
Benchmarking Classical and Predictive Control Strategies for AMR/AGV Systems Using a Modular MATLAB Framework

The use of Autonomous Mobile Robots (AMRs) and Automated Guided Vehicles (AGVs) is increasingly relevant in industrial and logistics environments, particularly during the early stages of system development, where control, navigation, and coordination strategies must be evaluated efficiently and with low implementation overhead. This paper presents a modular and transparent MATLAB-based simulation framework designed to support early-stage modeling, comparison, and validation of AMR/AGV behaviors under realistic operational conditions. The proposed framework is implemented exclusively in base MATLAB, without relying on dedicated Robotics or Control System Toolboxes, ensuring accessibility, reproducibility, and full transparency of the underlying models. The framework integrates several functional modules, including: (i) adaptive velocity profiling to highlight behavioral differences between AMR and AGV motion characteristics; (ii) trajectory tracking using classical Proportional–Integral–Derivative (PID) control and Model Predictive Control (MPC); (iii) obstacle avoidance based on artificial potential fields; (iv) a simplified SLAM-inspired occupancy grid mapping approach; and (v) a basic demonstration of multi-robot interaction and coordination. While no novel control or navigation algorithms are introduced, the framework enables a consistent and parametric comparison of classical and predictive control strategies within a unified simulation environment. Simulation results indicate that MPC-based control provides improved trajectory tracking accuracy and smoother motion, with reduced oscillatory behavior when compared to PID control, particularly under dynamic constraints. The obstacle avoidance and mapping modules support safe navigation in partially known environments, while the multi-robot demonstration illustrates scalable interaction principles applicable to fleet-level studies. Overall, the proposed framework constitutes a scalable and computationally efficient foundation for early-stage AMR/AGV research, benchmarking, and education, and provides a structured basis for future extensions involving advanced perception, coordination, and optimization strategies.
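
To give a flavor of the classical side of the PID-versus-MPC comparison, here is a minimal Python sketch of a PID trajectory-tracking loop on a unit-mass plant. The paper's framework is implemented in base MATLAB; the gains and plant below are placeholder assumptions, not the authors' code.

```python
# Minimal sketch of a PID trajectory-tracking loop of the kind the framework
# benchmarks. Gains and the unit-mass double-integrator plant are illustrative.
import numpy as np

def simulate_pid(ref, dt=0.01, kp=8.0, ki=2.0, kd=1.5):
    x, v, integral, prev_err = 0.0, 0.0, 0.0, 0.0
    trajectory = []
    for r in ref:
        err = r - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # control force
        prev_err = err
        v += u * dt                                  # unit-mass dynamics
        x += v * dt
        trajectory.append(x)
    return np.array(trajectory)

t = np.arange(0.0, 5.0, 0.01)
reference = np.sin(t)                                # smooth 1D reference path
track = simulate_pid(reference)
print(f"RMS tracking error: {np.sqrt(np.mean((track - reference)**2)):.4f}")
```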

  • Open access
  • 4 Reads
An Intelligent Deep Learning Assisted ABC–NSGA-II Algorithm for Multi-Objective Directional Overcurrent Relay Coordination in Smart Grid

The high penetration of distributed generation, inverter-interfaced renewable energy sources, and dynamic microgrid operation has drastically transformed short-circuit behavior, leading to frequent miscoordination of conventional protection schemes. Bidirectional fault currents, variable fault levels, and network reconfiguration further complicate the coordination of directional overcurrent relays (DOCRs). To overcome these challenges, this paper proposes a deep learning–enabled hybrid Artificial Bee Colony (ABC) and NSGA-II–based multi-objective optimal protection coordination (DL–ABC–NSGA-II MO-OPC) framework for renewable-integrated power systems. The protection coordination problem is formulated as a constrained multi-objective optimization model, aiming to: (i) minimize the total operating time of primary and backup relays, (ii) maximize coordination margins under coordination time interval (CTI) constraints, and (iii) enhance protection security under bidirectional inverter-dominated fault currents and multiple network topologies. The decision variables include time multiplier settings (TMSs), plug setting currents (PSCs), and relay curve characteristics. A deep learning model is embedded within the ABC search process to predict promising regions of the search space, accelerate convergence, and adaptively tune control parameters under varying fault and loading conditions. The refined solutions are then evolved using NSGA-II to generate a well-distributed Pareto-optimal front. The proposed DL–ABC–NSGA-II framework is validated on a modified benchmark power network under grid-connected and islanded modes, considering multiple fault types and renewable penetration scenarios. Simulation results confirm a substantial reduction in overall relay operating time, complete elimination of miscoordination, and strong robustness against renewable-induced fault current uncertainty. Comparative analysis with conventional coordination and recent metaheuristic-based approaches demonstrates the superior convergence speed, enhanced solution diversity, and improved protection reliability of the proposed scheme. The proposed deep learning–assisted protection coordination strategy provides a scalable, intelligent, and cyber-resilient solution for next-generation smart grids and microgrids with high renewable energy penetration.
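
For context, the core coordination constraint in such DOCR formulations can be sketched as follows: each relay's operating time follows an inverse-time curve, and a backup relay must trail its primary by at least the CTI. The sketch below uses the IEC 60255 standard-inverse curve with purely illustrative settings and fault currents; it is not the paper's model or data.

```python
# Sketch of the DOCR coordination constraint: IEC standard-inverse operating
# times and the CTI margin check. Settings and fault currents are illustrative.
def iec_si_time(tms: float, i_fault: float, i_pickup: float) -> float:
    """IEC 60255 standard-inverse curve: t = TMS * 0.14 / ((I/Is)^0.02 - 1)."""
    return tms * 0.14 / ((i_fault / i_pickup) ** 0.02 - 1.0)

CTI = 0.3  # coordination time interval in seconds (typical value)

# Hypothetical primary/backup relay pair seeing the same fault current.
t_primary = iec_si_time(tms=0.10, i_fault=4000.0, i_pickup=500.0)
t_backup = iec_si_time(tms=0.25, i_fault=4000.0, i_pickup=600.0)

margin = t_backup - t_primary
print(f"primary: {t_primary:.3f} s, backup: {t_backup:.3f} s, margin: {margin:.3f} s")
print("coordinated" if margin >= CTI else "miscoordination")
```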

  • Open access
  • 91 Reads
AI-Powered Computer Vision Industrial Quality Inspection Systems: A Practice Review

Computer vision (CV) systems driven by artificial intelligence (AI) are gradually replacing manual procedures in industrial quality inspection with real-time, data-driven processes, enabling automated decision-making based on visual data. Conventional inspection procedures suffer from limited scalability, human-related errors, and high operational costs, which drives the increasing reliance on smart vision-based technologies. This paper provides a practice-oriented review of AI-based computer vision systems for industrial quality control, with emphasis on real-world deployment issues and performance aspects. Two representative industrial case studies are examined. The first investigates real-time extrusion monitoring in robotic building construction, where geometric deviations, surface defects, and process inconsistencies are detected during material deposition using deep learning-based vision models. The second case study focuses on automated inspection of bolts and screws in manufacturing lines, addressing presence detection, orientation recognition, and defect classification under high-speed production conditions. In both cases, widely adopted AI techniques, including convolutional neural networks, image processing pipelines, and edge-computing hardware, are discussed and compared. The analysis shows that AI-enabled computer vision systems significantly outperform traditional rule-based or manual solutions in terms of inspection accuracy, consistency, and throughput. Nevertheless, challenges related to dataset quality, model generalization, lighting variability, and real-time computational constraints remain critical in industrial environments. In conclusion, AI-based computer vision plays a central enabling role in intelligent quality inspection within the context of Industry 5.0. Future research should focus on adaptive model capabilities, tighter integration with cyber-physical systems, and scalable deployment strategies to achieve reliable and autonomous inspection across diverse industrial sectors.
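
As a hedged illustration of a classical (non-learning) baseline for the bolt and screw case study, the following OpenCV sketch performs presence detection and coarse orientation estimation via thresholding and contour analysis. The file name and area threshold are assumptions.

```python
# Illustrative classical baseline for bolt/screw inspection: presence and
# orientation via Otsu thresholding and contour analysis (OpenCV 4.x API).
# "bolt_station.png" and the 200 px area cutoff are placeholder assumptions.
import cv2

img = cv2.imread("bolt_station.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) < 200:                   # ignore speckle noise
        continue
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)   # centroid + orientation
    print(f"part at ({cx:.0f}, {cy:.0f}), orientation {angle:.1f} deg")
```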

  • Open access
  • 7 Reads
Identification of Process Indices in Elastic Emission Machining Using Piezoelectric Diaphragm (PZT) Sensors

Introduction: Efficient detection of subsurface damage (SSD) is essential to ensure the service life and performance of machined components, yet conventional identification methods are predominantly destructive. Elastic Emission Machining (EEM) stands out as a non-contact process that removes material at the atomic scale through chemical reactions, producing mirror-like finishes. This technique can be utilized for SSD detection by generating spherical cap-shaped imprints that allow access to and evaluation of the material's integrity below the surface. Methods: This study presents an experimental analysis of the EEM process on glass specimens using piezoelectric diaphragm (PZT) sensors for in situ signal acquisition. A 2^(k-p) fractional factorial design was implemented to evaluate the influence of variables such as tool rotation (47 to 67 Hz), tool material hardness, tool finish (CNC vs. ultra-precision), applied load (260 and 520 g), and testing time. Signals were processed using Fast Fourier Transform (FFT) and Root Mean Square (RMS) analysis to extract quantitative process indices. Results: The results demonstrated that the tool's surface finish is directly transferred to the workpiece, altering the texture of the generated caps. Frequency analysis revealed that most caps presented predominant peaks in the 20 to 32 Hz range. Notably, for tool rotations above 60 Hz, the highest peak frequency shifted to the 31–33 Hz range, or approximately 2512 Hz. Furthermore, trials that exhibited peaks at higher frequencies were correlated with lower average RMS values. Conclusions: The use of PZT sensors proved to be an effective and low-cost method for monitoring the EEM process. The generated indices successfully correlate acoustic signal characteristics with machining parameters, providing a solid perspective for future subsurface integrity characterization.
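
A minimal sketch of the two signal indices named in the Methods, the dominant FFT peak and the RMS level, applied to a synthetic stand-in for a PZT diaphragm signal; the sampling rate and test tone below are assumptions, not the study's acquisition settings.

```python
# Sketch of the study's two process indices: dominant FFT peak frequency and
# RMS level. The synthetic 27 Hz tone only stands in for a real PZT signal.
import numpy as np

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
signal = 0.5 * np.sin(2 * np.pi * 27 * t) + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

rms = np.sqrt(np.mean(signal ** 2))
print(f"dominant peak: {peak_hz:.1f} Hz, RMS: {rms:.3f}")
```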

  • Open access
  • 9 Reads
Vision-Based Chessboard Perception and Coordinate Mapping for Delta Robot Pick and Place

Vision-guided robotic manipulation plays a central role in flexible and low-cost automation systems, particularly for structured pick-and-place tasks. This paper presents a proof-of-concept study focused on vision-based perception and coordinate mapping for a delta robot, using a chessboard as a structured benchmark environment. The chessboard provides a regular grid with known geometry, enabling a systematic evaluation of computer vision techniques for object localization and robot-oriented spatial mapping. The proposed framework prioritizes visual perception over advanced control, aiming to extract reliable spatial information from camera input and convert it into robot-ready coordinates. A camera-based vision pipeline is developed using OpenCV to detect the chessboard, estimate its pose through corner detection and homography, and segment individual squares of the board. Chess piece presence and position are determined through color segmentation and contour analysis, allowing square occupancy estimation and centroid extraction. Camera calibration and board-plane registration enable the transformation of image coordinates into the delta robot workspace, providing target positions for pick-and-place actions. To assess robustness and performance, classical computer vision approaches are compared with convolutional neural network-based classifiers integrated via OpenCV’s DNN module for chess piece detection and classification. The methods are experimentally evaluated under varying lighting conditions in terms of detection accuracy, processing latency, and computational load. The results highlight practical trade-offs between classical and learning-based vision techniques for structured manipulation tasks, particularly regarding robustness and real-time feasibility. Although demonstrated in a chessboard scenario, the proposed approach is directly applicable to grid-based industrial operations such as kitting, tray loading, and fixture-based assembly. This work establishes a practical foundation for vision-based coordinate mapping in delta robot pick-and-place applications and supports future extensions toward more complex perception and manipulation strategies.
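
The perception-to-coordinates step described above can be sketched with standard OpenCV calls: detect the chessboard corners, fit a homography from image pixels to the board plane, and map a piece centroid into board coordinates. The pattern size, square size, and file name are assumptions; the final transform into the delta robot frame would come from the calibration the paper describes.

```python
# Sketch of pixel-to-board-plane mapping via chessboard detection and a
# homography (OpenCV 4.x). Square size, file name, and the sample centroid
# are placeholder assumptions.
import cv2
import numpy as np

PATTERN = (7, 7)          # inner corners of an 8x8 board
SQUARE_MM = 30.0          # assumed square size

img = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, PATTERN)
if found:
    # Known board-plane coordinates of the inner corners, in millimetres,
    # in the same row-major order the detector reports.
    obj = np.array([[c * SQUARE_MM, r * SQUARE_MM]
                    for r in range(PATTERN[1]) for c in range(PATTERN[0])],
                   dtype=np.float32)
    H, _ = cv2.findHomography(corners.reshape(-1, 2), obj)

    centroid = np.array([[[412.0, 233.0]]], dtype=np.float32)  # e.g. from contours
    board_xy = cv2.perspectiveTransform(centroid, H)[0, 0]
    print(f"piece at board plane ({board_xy[0]:.1f}, {board_xy[1]:.1f}) mm")
```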

  • Open access
  • 28 Reads
Imperfection Modelling in Fault-Tolerant System Design

In high-stress or crisis-driven engineering contexts, system design often defaults to idealized targets of perfection. Yet, resource limitations, time pressure, and incomplete information frequently prevent such targets from being achieved, resulting in increased cognitive load, rising error rates, and brittle system behavior. This paper critiques the dominant perfection-oriented paradigm in crisis engineering and proposes an alternative approach: designing with imperfection tolerance as a primary design constraint.

Instead of attempting to eliminate all uncertainty, the proposed framework introduces the concept of imperfection budgeting, wherein acceptable degrees of fault, performance degradation, or incomplete coverage are explicitly defined and tracked from the outset. Crucially, the study does not prescribe a fixed metric for imperfection. Rather, it introduces a multi-dimensional space for imperfection modeling, covering multiple quality-related dimensions such as functional deviation, recoverability from faults, detectability of anomalies, and system stability, which can be modeled through tolerance bands, threshold-based metrics, or qualitative categorization depending on system context.
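
One possible, purely illustrative encoding of an imperfection budget with tolerance bands is sketched below in Python; the dimensions and limits are hypothetical examples, since the paper deliberately leaves the metric open to system context.

```python
# Illustrative "imperfection budget": each quality dimension gets an explicit
# tolerance band that is tracked rather than assumed to be zero. Dimensions
# and limits are hypothetical, not prescribed by the paper.
IMPERFECTION_BUDGET = {
    # dimension:            (measured value, tolerated upper bound)
    "functional_deviation": (0.03, 0.05),   # fraction of off-spec outputs
    "recovery_time_s":      (4.2, 10.0),    # time to recover from a fault
    "missed_anomalies":     (0.08, 0.05),   # fraction of undetected anomalies
}

for dimension, (measured, limit) in IMPERFECTION_BUDGET.items():
    status = "within budget" if measured <= limit else "OVER BUDGET"
    print(f"{dimension}: {measured} / {limit} -> {status}")
```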

Moreover, this imperfection-tolerant perspective aligns closely with the design logic of modern AI systems, which inherently operate under probabilistic reasoning and imperfect information. AI models, particularly those based on machine learning, do not guarantee deterministic outputs; they rely on statistical inference, often exhibiting uncertainty and error margins in real-world conditions. By acknowledging and managing these imperfections as intrinsic characteristics rather than anomalies, the proposed framework supports more transparent, resilient, and adaptive system design, whether in human-controlled crisis settings or AI-driven autonomous environments.

By shifting the focus from eliminating failure to managing it, this tolerance-based approach enables more resilient, transparent, and decision-friendly engineering under non-ideal conditions.

  • Open access
  • 9 Reads
The 3D-Printed Low-Cost 6-DoF Robot Adolfo: Technology Overview and Benchmarking

Fully articulated 6-degree-of-freedom (6-DoF) robotic manipulators remain one of the most versatile architectures in industrial robotics due to their high dexterity and capability to perform complex spatial tasks. However, their widespread adoption is often limited by cost, system complexity, and reduced accessibility for small-scale industry, research, and education. Recent advances in Additive Manufacturing (AM), particularly low-cost 3D printing technologies, are enabling new design paradigms that challenge conventional approaches to robotic arm development by increasing geometric freedom, modularity, and manufacturing accessibility. This paper presents a comprehensive technology overview and benchmarking study of the Adolfo, a low-cost 6-DoF robotic manipulator developed by Smile.Tech, whose mechanical structure is predominantly produced using 3D-printed components. The work addresses the multidisciplinary aspects of the system, including mechanical architecture, actuation strategy, control and interface software, and compatibility with virtual operation and simulation environments. A concise review of the current state of the art in industrial and collaborative 6-DoF manipulators is provided to contextualize the proposed solution. To assess the positioning of the Adolfo within the existing market landscape, a benchmarking analysis is conducted against representative commercial robotic arms, focusing on key operational and technical indicators such as payload-to-weight ratio, workspace, repeatability, structural design, cost range, and software ecosystem. The results highlight the trade-offs between performance, cost, and manufacturability inherent to low-cost, additively manufactured robotic systems, while identifying application domains where such platforms offer a competitive and flexible alternative to conventional industrial solutions. The presented study aims to support informed decision-making in the selection and development of accessible 6-DoF robotic platforms for research, education, and light industrial applications, with particular relevance for human–robot collaboration in Industry 5.0 manufacturing contexts.

  • Open access
  • 9 Reads
A Decision Framework to Select Robotics Simulators for Automation and Control Tasks: Criteria and Validation

Robotics simulation is a key enabler for automation and control development, allowing safer experimentation, faster design iterations, and reduced development cost. Nevertheless, the current simulator ecosystem is highly fragmented, spanning open-source and commercial tools with different levels of physical fidelity, performance, and ecosystem integration. As a result, simulator selection is frequently driven by familiarity or availability rather than explicit task requirements, often leading to suboptimal engineering workflows. This paper proposes a task-oriented decision framework to support reproducible and transparent selection of robotics simulators based on a fixed and structured set of evaluation criteria. These criteria cover: (i) physical fidelity and contact modeling; (ii) sensor modeling and visual realism; (iii) performance and scalability aspects, including headless execution, parallelism, and GPU acceleration; (iv) ecosystem integration with automation, control, and learning pipelines, including ROS/ROS 2 compatibility; (v) extensibility and programmability; and (vi) practical constraints such as hardware requirements, licensing models, and learning curve. The framework is operationalized through a checklist and scoring matrix guided by four key questions addressing the target task, fidelity-versus-speed priorities, target software stack, and sim-to-real transfer requirements. To validate feasibility in a representative engineering workflow, a URDF-based modeling and simulation pipeline is implemented and used to compare Gazebo, as an open-source physics-based simulator, against MATLAB/Simulink, representing a commercial model-based simulation environment. The comparison reports practical indicators, including setup effort, integration complexity, computational requirements, and runtime behavior, for representative motion and sensing scenarios. The results highlight consistent trade-offs across different user profiles and application needs, while also revealing open gaps in the field, notably the lack of unified multi-task benchmarks and joint metrics capable of simultaneously capturing simulation fidelity, computational performance, and sim-to-real transfer effectiveness.
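
A minimal sketch of how the checklist and scoring matrix could be operationalized, with hypothetical criterion weights and per-simulator ratings; the numbers below are placeholders, not the paper's evaluation results.

```python
# Sketch of a weighted scoring matrix over the framework's six criteria.
# Weights and 1-5 ratings are placeholder assumptions for illustration.
CRITERIA_WEIGHTS = {
    "physical_fidelity": 0.25,
    "sensor_realism":    0.15,
    "performance":       0.20,
    "ecosystem_ros":     0.20,
    "extensibility":     0.10,
    "practical_cost":    0.10,
}

SCORES = {  # hypothetical ratings for the two compared environments
    "Gazebo":          {"physical_fidelity": 4, "sensor_realism": 4,
                        "performance": 3, "ecosystem_ros": 5,
                        "extensibility": 4, "practical_cost": 5},
    "MATLAB/Simulink": {"physical_fidelity": 4, "sensor_realism": 3,
                        "performance": 4, "ecosystem_ros": 3,
                        "extensibility": 4, "practical_cost": 2},
}

for sim, ratings in SCORES.items():
    total = sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
    print(f"{sim}: weighted score {total:.2f} / 5")
```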
