
List of accepted submissions

Hybrid Quantum-Classical Communication Techniques: Bridging the Gap Between Quantum and Classical Networks

As quantum technologies advance, the integration of quantum-safe solutions with classical systems has emerged as a critical area of research, enabling enhanced security in preparation for the advent of quantum computing and supporting the development of future quantum networks.

This presentation provides a comprehensive examination of hybrid quantum-classical communication techniques, highlighting their role in creating practical and scalable quantum-safe communication solutions. By combining the strengths of quantum technologies, such as the unpredictability of Quantum Random Number Generation (QRNG), the robustness of Post-Quantum Cryptography (PQC), and the information-theoretic security of Quantum Key Distribution (QKD), with the well-established infrastructure of classical networks, hybrid systems offer a feasible pathway toward quantum security in the short term. Key focus areas include QRNG, which enhances the randomness and security of cryptographic keys, PQC algorithms designed to resist attacks by quantum computers, and QKD, which ensures secure key distribution using the principles of quantum mechanics. While these quantum technologies offer significant advantages, the integration of quantum and classical systems presents challenges, such as ensuring compatibility, managing costs, and maintaining scalability. The presentation delves into the pros and cons of these solutions, evaluating their adaptability, performance, and potential for future upgrades as quantum technologies mature.
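
As a minimal illustration of one common hybrid pattern, sketched under the assumption (not stated in the abstract) that a session key is derived from both a QKD-delivered secret and a PQC-established secret via an HKDF, so the result remains safe as long as either layer holds:

```python
# Hedged sketch only; the presenters' actual scheme may differ.
# The two input secrets are stand-ins for key material obtained elsewhere
# (e.g. from a QKD link and a post-quantum KEM such as ML-KEM).
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # HKDF-Extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                    # HKDF-Expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

qkd_key = os.urandom(32)       # placeholder for key material from the QKD layer
pqc_secret = os.urandom(32)    # placeholder for the shared secret from a PQC KEM
session_key = hkdf_sha256(qkd_key + pqc_secret, salt=b"hybrid-demo",
                          info=b"qkd+pqc session key")
print("hybrid session key:", session_key.hex())
```

Neither input secret is ever used directly as the session key; compromising only one layer does not reveal the derived key, which is the point of the hybrid construction.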

Furthermore, the presentation underscores the real-world applications and future potential of hybrid quantum-classical systems. Current use cases include the Global QRNG Cloud Platform (QSaaS), which provides secure random number generation to clients worldwide. We also highlight a quantum-safe encryption demonstration between data centers and the use of quantum-safe key management systems (KMS). These examples demonstrate how hybrid systems are already providing quantum-safe communication solutions and paving the way for broader adoption across industries. By leveraging these hybrid techniques, we aim to outline a clear path toward practical and effective communication solutions that will meet the security demands of the quantum computing era.

Permeability-Gradient Nanocrystalline Toroidal Core with Uniform Magnetic Flux Density Distribution

Inductors play a crucial role in electronic devices and are widely utilized. However, the uneven magnetic flux density distribution (MFDD) within a toroidal core can lead to premature saturation on the inner side, ultimately decreasing material utilization. To address the issue of non-uniform MFDD, a nanocrystalline toroidal core with a permeability gradient (PG) along its radius has been proposed and manufactured. The permeability of the nanocrystalline flake ribbon (NFR) can be easily configured and controlled through a physical crushing process. A magnetic reluctance model is developed using a differential approach to explain this phenomenon. The influence of the number of sub-layers and of the permeability gradient is then simulated using finite element analysis software. Following this, four NFR cores are fabricated for experimental testing, and the temperature rise is measured to indirectly assess the MFDD within the core. For the core with a single relative permeability (μ = 1500), the temperature rise is 92.2 °C on the inner side and 82.86 °C on the outer side, a maximum temperature difference of 9.34 °C. In contrast, the core with a permeability gradient (μ = 1600-2200) shows a much smaller temperature difference of only 2.51 °C. The simulation and experimental results align closely, indicating that the proposed PG-NFR core provides a more uniform magnetic flux density distribution.
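
The abstract does not spell out the model; as a rough sketch of why a radial permeability gradient flattens the flux density (our reconstruction, not necessarily the authors' reluctance model):

```latex
% Ampere's law for a toroid with N turns carrying current I, at radius r:
B(r) \;=\; \mu_0\,\mu_r(r)\,\frac{N I}{2\pi r}
% With constant \mu_r, B(r) peaks at the inner radius and saturates there first.
% A gradient \mu_r(r) = \mu_{\mathrm{in}}\, r / r_{\mathrm{in}} cancels the 1/r factor:
B(r) \;=\; \mu_0\,\mu_{\mathrm{in}}\,\frac{N I}{2\pi r_{\mathrm{in}}} \;=\; \text{const.}
```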

Research on Data Encryption and Authentication Methods for Industrial Automation Based on Machine Learning

With the deep integration of modernization and digitalization, the industrial Internet has been upgraded across the board. In recent years, security incidents involving industrial control system (ICS, commonly referred to as industrial automation) software have occurred repeatedly. The weak endogenous security of industrial communication protocols is one of the key causes of these incidents. An Intrusion Detection System (IDS) is a network security technology that monitors network activity through port scanning, traffic analysis, and other means to identify possible intrusion behavior. By detecting anomalous activity in the network, it can discover and block attacks in time. However, intrusion detection and defense techniques from traditional information technology cannot be applied to industrial automation software directly. Therefore, based on the characteristics of industrial automation software, this paper studies intrusion detection techniques suited to such systems, uses the HAQPSO algorithm to improve the Extreme Learning Machine (ELM) and Support Vector Machine (SVM), and obtains ICS intrusion detection models based on the improved ELM and the improved SVM. Compared with the QPSO, PSO, and GA algorithms, the models improved by HAQPSO show stronger overall performance and better meet the intrusion detection requirements of specific ICS environments. According to the simulation results, an ICS intrusion detection model composed of stacked classifiers achieves higher accuracy than a single-classifier model, while its false positive and false negative rates are lower.
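
A hedged sketch of the stacked-classifier idea only; the HAQPSO-tuned ELM and SVM are not reproduced here, the features, labels, and hyperparameters below are placeholders, and scikit-learn's MLPClassifier stands in for an ELM, which scikit-learn does not provide:

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # placeholder ICS traffic features
y = rng.integers(0, 2, size=1000)        # placeholder labels: 0 normal, 1 intrusion

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))),
        ("elm_like", make_pipeline(StandardScaler(),
                                   MLPClassifier(hidden_layer_sizes=(128,), max_iter=500))),
    ],
    final_estimator=LogisticRegression(),  # meta-learner combining base-model outputs
    cv=5,
)
stack.fit(X, y)
print("training accuracy:", stack.score(X, y))
```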

Vehicle VIN Recognition Based on Deep Learning and OCR

This paper proposes a deep learning-based Optical Character Recognition (OCR) system aimed at addressing the limitations of traditional Vehicle Identification Number (VIN) recognition methods in complex environments. As the unique identifier for vehicles, the VIN plays a critical role in vehicle management, registration, and tracking. However, conventional recognition methods, which rely on manual transcription or simple license plate recognition, are inefficient and prone to errors, especially in situations where there is insufficient lighting, reflections, dirt, or significant tilt angles. To overcome these challenges, the proposed system uses high-resolution cameras to capture vehicle images and applies image preprocessing techniques such as grayscale conversion and binarization to enhance image quality, ensuring that characters are clearly visible even in challenging conditions. At the core of the system is the integration of deep learning models, including Long Short-Term Memory (LSTM) networks, which automatically learn and extract key features from the images, enabling precise VIN recognition without the need for manual intervention. Compared to traditional methods based on template matching or rules, deep learning models offer greater generalization capabilities, allowing for high-accuracy recognition under various complex conditions. Additionally, the system incorporates a character verification function to ensure that the recognized VIN conforms to standard formatting and effectively distinguishes between easily confused characters, such as "0" and "O." This feature not only improves recognition accuracy but also helps prevent the misuse of charging cards, further optimizing the management and utilization of corporate vehicle resources.
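
One plausible form of the character-verification step, sketched under the assumption that it follows the standard 17-character VIN rules (the letters I, O, and Q are disallowed, and position 9 carries the North American check digit); the abstract does not specify the authors' exact rule set:

```python
# Hedged sketch: reject illegal characters and validate the check digit.
TRANSLITERATION = {
    **{str(d): d for d in range(10)},
    "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
    "J": 1, "K": 2, "L": 3, "M": 4, "N": 5, "P": 7, "R": 9,
    "S": 2, "T": 3, "U": 4, "V": 5, "W": 6, "X": 7, "Y": 8, "Z": 9,
}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def is_valid_vin(vin: str) -> bool:
    vin = vin.strip().upper()
    if len(vin) != 17 or any(c not in TRANSLITERATION for c in vin):
        return False                      # wrong length, or contains I/O/Q etc.
    total = sum(TRANSLITERATION[c] * w for c, w in zip(vin, WEIGHTS))
    remainder = total % 11
    expected = "X" if remainder == 10 else str(remainder)
    return vin[8] == expected             # position 9 is the check digit

print(is_valid_vin("1HGCM82633A004352"))  # True: a commonly cited sample VIN
print(is_valid_vin("1HGCM82633A0O4352"))  # False: "O" is not a legal VIN character
```

A recognized string in which "0" was misread as "O" fails immediately, which is how this kind of check catches the confusable characters the abstract mentions.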

An Optimization of the Biomagnetism Model for the BEST Software

Recently, Chenxi Sun et al. from Peking University reported the Biomagnetism Evaluation via Simulated Testing (BEST) software [1]. This software integrates a simulated current dipole model of the heart's biomagnetic field with a convolutional neural network to assess the diagnostic performance of magnetocardiography (MCG) devices through simulation tests. It provides a novel approach for optimizing the performance of MCG systems and their sensor components. However, the original in silico biomagnetism model simplified the cardiac current dipole, causing discrepancies between the simulated and actual results. In this study, we focus on optimizing the biomagnetism model in the BEST software and introduce an electrocardiographic vector as the new biomagnetism model, offering a more accurate depiction of the cardiac current dipole's direction and magnitude. By integrating the electrocardiographic vector into the BEST software, we aim to produce spatial magnetic field simulations that more closely align with real cardiac magnetic field distributions. We compared the differences between the original and optimized models in their simulation of cardiac magnetic field distributions. The results demonstrate that the optimized model significantly improves both the accuracy and realism of the magnetic field simulations. Specifically, the magnetic field distribution generated by the electrocardiographic vector better matches theoretical expectations and real-world data in terms of spatial morphology and intensity distribution. In summary, we optimize the biomagnetism model of the original BEST software by introducing the electrocardiographic vector. The enhanced accuracy in simulating cardiac magnetic field distributions will contribute to the development of a second version of the BEST software, providing a more reliable simulation tool for evaluating the diagnostic performance of MCG systems. Furthermore, it offers valuable support for the performance evaluation of biomagnetic sensors and related research.
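
For orientation, a minimal sketch of the textbook current-dipole field in an unbounded homogeneous conductor, with the dipole moment taken from a hypothetical electrocardiographic vector; the actual BEST forward model may differ:

```python
# B(r) = (mu0 / 4*pi) * Q x (r - r0) / |r - r0|^3  (infinite-medium approximation)
import numpy as np

MU0 = 4e-7 * np.pi                       # vacuum permeability [T*m/A]

def dipole_field(q, r0, r):
    """Field of current dipole q [A*m] at r0, evaluated at points r (N, 3)."""
    d = r - r0
    norm3 = np.linalg.norm(d, axis=1, keepdims=True) ** 3
    return MU0 / (4 * np.pi) * np.cross(q, d) / norm3

xs = np.linspace(-0.1, 0.1, 32)                      # sensor plane 5 cm above the dipole
grid = np.array([(x, y, 0.05) for x in xs for y in xs])
ecg_vector = np.array([5e-6, 2e-6, 0.0])             # hypothetical cardiac vector [A*m]
bz = dipole_field(ecg_vector, np.zeros(3), grid)[:, 2].reshape(32, 32)
print("peak |Bz| on the sensor plane:", np.abs(bz).max(), "T")
```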

A Cost-Effective Solution for Shortlines’ Rail Track Condition Monitoring: An Automated AI Rail Extraction Framework for Low-Density LiDAR Data Without Sensor Configurations

Approximately one-third of the U.S. rail network is owned and operated by shortline railroads (Class II and III), which often face challenges due to marginal infrastructure conditions, limited revenue, and a small workforce. To effectively manage their infrastructure, shortlines need a reliable and cost-effective inventory of their rail tracks. While significant advancements have been made in automatic rail extraction methods, these typically require high-density point cloud datasets with known sensor specifications, which are often unattainable for shortlines due to financial and technical constraints. To overcome these challenges, we propose a novel, configuration-independent coarse-to-fine extraction method designed specifically for low-density LiDAR data. This method leverages high-level geometric features of the rails, making it suitable for point clouds with unknown sensor properties. The integrated framework combines multiple AI and signal-processing methods, including slicing, peak finding, isolation forest, DBSCAN, k-means clustering, nearest neighbors, HLSF, and Gaussian mixture models. We evaluated our framework using a grade-crossing dataset from the Federal Railroad Administration, characterized by a point cloud density of only 293 points/m². Our results demonstrate an average completeness of 96.97%, correctness of 99.71%, and quality of 96.67% across various extraction scenarios. These findings suggest that our method empowers shortlines to effectively extract rail geometry measurements from any available low-density LiDAR data.
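
A hedged sketch of one plausible coarse-to-fine step, not the authors' full framework: slice the cloud along the track, take elevation peaks as rail-head candidates, reject outliers with an isolation forest, and group the survivors with DBSCAN. The synthetic points below are placeholders.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 50, 15000),
                          rng.uniform(-3, 3, 15000),
                          rng.normal(0.0, 0.02, 15000)])
rails = np.column_stack([rng.uniform(0, 50, 6000),
                         rng.choice([-0.72, 0.72], 6000) + rng.normal(0, 0.02, 6000),
                         rng.normal(0.18, 0.01, 6000)])
points = np.vstack([ground, rails])      # placeholder scene: flat ground + two rails

candidates = []
for x0 in np.arange(0, 50, 1.0):                     # 1 m slices along the track
    s = points[(points[:, 0] >= x0) & (points[:, 0] < x0 + 1.0)]
    if len(s) == 0:
        continue
    hist, edges = np.histogram(s[:, 2], bins=30)     # elevation histogram per slice
    peaks, _ = find_peaks(hist, prominence=5)        # rail heads sit near local maxima
    for p in peaks:
        candidates.append(s[(s[:, 2] >= edges[p]) & (s[:, 2] < edges[p + 1])])

cand = np.vstack(candidates)
cand = cand[IsolationForest(random_state=0).fit_predict(cand) == 1]   # drop outliers
labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(cand[:, :2])     # group into rails
print("candidate rail clusters:", len(set(labels)) - (1 if -1 in labels else 0))
```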

A Practical Study on Smart Makeup Mirrors for Visually Impaired Women Based on GT-AHP-FCE

This study aims to explore the usage needs of disabled groups for universal products and proposes an innovative design and evaluation method based on Grounded Theory (GT), Analytic Hierarchy Process (AHP), and Fuzzy Comprehensive Evaluation (FCE). The goal is to enhance the user experience and satisfaction of disabled groups when using these products, thereby promoting the development of universal products and fostering greater social support for disabled individuals. The study uses GT to deeply investigate the actual needs of disabled users, applies AHP to prioritize these needs and identify key design factors, and employs FCE to systematically evaluate the design solutions to optimize the final design. The results show that the GT-AHP-FCE method effectively identifies and meets the special needs of disabled groups in product usage, improving their user experience and satisfaction. This research explores a GT-AHP-FCE-based design and evaluation method for universal products, and validates its feasibility through the design case of a smart makeup mirror. It provides innovative theoretical and practical guidance for future designers of universal products and offers new approaches for designing products that meet the needs of disabled groups.
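
A minimal sketch of the AHP weighting and fuzzy comprehensive evaluation (FCE) steps described above; the pairwise judgments and membership matrix below are invented placeholders, not the study's data:

```python
import numpy as np

# AHP: pairwise comparison of three hypothetical design criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # criterion weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)           # consistency index
CR = CI / 0.58                                 # random index RI = 0.58 for n = 3
print("weights:", w.round(3), "consistency ratio:", round(CR, 3))

# FCE: membership of each criterion in the grades (poor, fair, good, excellent)
R = np.array([[0.1, 0.2, 0.4, 0.3],
              [0.0, 0.3, 0.5, 0.2],
              [0.2, 0.3, 0.3, 0.2]])
B = w @ R                                      # weighted fuzzy evaluation vector
grades = ["poor", "fair", "good", "excellent"]
print("evaluation vector:", B.round(3), "-> grade:", grades[int(B.argmax())])
```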

AFORRD: An automatic and flexible circadian rhythm disturbance regulation device

The biological clock controls daily physiological, metabolic, and behavioral rhythms, with normal oscillations serving as indicators of good health. Disruptions in these rhythms increase the risk of diseases like obesity, diabetes, cardiovascular disease, and cancer. Studying biorhythmic homeostasis is crucial for health, disease prevention, and therapy development. Due to the challenges of studying patients with rhythm disorders in clinical settings, animal models are crucial for investigating these mechanisms. Compared to genetic or pharmacological interventions, dietary and light-based interventions in animal models more accurately reflect human dysrhythmia caused by unhealthy lifestyles. Therefore, we integrated electronic technology to modify the pathological modeling environment for mice, aiming to create an animal living space that more accurately replicates human circadian rhythm disorders. We call this system the automatic and flexible oscillatory rhythm regulation device (AFORRD).
In the SPF-grade feeding system for cardiovascular disease model animals, a heat-permeable fabric was used to create a light-shielded space with independent feeding and ventilation ducts. A full-spectrum sunlight source, connected to a time switch, regulated light exposure, and food and water were provided during these periods. We established a rhythm-disordered experimental group and a control group. The control group followed a standard 12-hour light/12-hour dark cycle, while the experimental group experienced an inverted 12-hour dark/12-hour light cycle. After 4 weeks, mice were sacrificed, heart tissue was collected, and mRNA was extracted for transcriptome sequencing.
The study showed that reversed sleep-wake and feeding patterns significantly impacted core circadian gene expression, particularly Bmal1 and Clock. Our model allows for rhythm-disrupted and control mice to be housed together in an SPF environment, enhancing experimental control. The system’s flexibility in simulating various human schedules makes it a promising tool for standardized circadian rhythm research, applicable to both cardiac and other biological clock studies, aiding in the exploration of lifestyle impacts on health and disease.
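
As an illustration only, a sketch of the kind of time-switch logic such a device could use; the abstract does not describe AFORRD's firmware, and the schedule times and feeding rule below are assumptions:

```python
from datetime import datetime, time

LIGHTS_ON, LIGHTS_OFF = time(7, 0), time(19, 0)    # assumed 12 h light / 12 h dark window

def lights_on(now: datetime, inverted: bool) -> bool:
    in_day_window = LIGHTS_ON <= now.time() < LIGHTS_OFF
    return in_day_window != inverted               # the inverted group flips the window

def feeder_open(now: datetime, inverted: bool, feed_in_dark: bool = True) -> bool:
    on = lights_on(now, inverted)
    return (not on) if feed_in_dark else on        # which phase is fed is left configurable

now = datetime(2024, 5, 1, 9, 30)
for group, inv in [("control", False), ("rhythm-disordered", True)]:
    print(group, "| lights:", lights_on(now, inv), "| feeder:", feeder_open(now, inv))
```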

Multi-scale vehicle image enhancement based on hybrid chaotic particle swarm algorithm

Image enhancement plays a crucial role in the process of image recognition, especially in applications such as automatic license plate recognition, where clarity and accuracy are essential for extracting precise information from images. In order to address the challenge of improving the recognition quality of license plates, this paper introduces an advanced gray-scale image enhancement technique. This method integrates the chaotic particle swarm optimization (CPSO) algorithm with the simulated annealing (SA) algorithm at different image scales, effectively optimizing the enhancement process. The method begins by decomposing the original image using the Laplacian pyramid decomposition, which generates a series of multi-level images, each containing information from different scales or resolutions. By doing so, we can isolate and enhance specific image details more effectively at each level. Next, the chaotic particle swarm and simulated annealing algorithms are employed, leveraging their respective strengths in global search and local optimization. Specifically, the particle swarm algorithm provides a mechanism for exploring the parameter space, while the simulated annealing algorithm refines the solutions by preventing premature convergence. A hybrid perturbation operator is applied to the local optimal solution at each scale to further enhance the image's details and contrast. Finally, all the enhanced layers are reconstructed back into a single image, thereby completing the image enhancement process. Extensive simulation experiments were conducted, comparing this method with other traditional image enhancement algorithms. The experimental results demonstrate that the proposed technique yields superior visual quality, effectively improving image clarity and detail, which is crucial for tasks such as license plate recognition.
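
A hedged sketch of the Laplacian-pyramid decompose/enhance/reconstruct skeleton only; the CPSO and simulated-annealing parameter search itself is not shown, and enhance_level() is a hypothetical stand-in for it. The input path plate.png is likewise assumed.

```python
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    gaussians = [img.astype(np.float32)]
    for _ in range(levels):
        gaussians.append(cv2.pyrDown(gaussians[-1]))
    lap = [gaussians[i] - cv2.pyrUp(gaussians[i + 1], dstsize=gaussians[i].shape[1::-1])
           for i in range(levels)]
    return lap + [gaussians[-1]]                  # detail levels + coarse residual

def reconstruct(pyramid):
    img = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img, dstsize=lap.shape[1::-1]) + lap
    return np.clip(img, 0, 255).astype(np.uint8)

def enhance_level(level, gain=1.3):
    return level * gain                           # placeholder for the CPSO/SA-tuned mapping

gray = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input image
if gray is None:                                            # fall back to a synthetic ramp
    gray = np.tile(np.linspace(0, 255, 320, dtype=np.uint8), (240, 1))
pyr = build_laplacian_pyramid(gray)
pyr = [enhance_level(l) for l in pyr[:-1]] + [pyr[-1]]      # boost detail layers only
cv2.imwrite("plate_enhanced.png", reconstruct(pyr))
```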

Realization of Vibration Detection for Large Equipment based on High-speed Photography Technology

All types of industrial and military equipment are affected by vibration during operation; for precision systems in particular, the effect of vibration on accuracy is pronounced. The vibrations originate either from the active signals of the engine or from the passive signals of the environment, and in either case their presence affects the equipment during prolonged operation. A prerequisite for studying these effects is measuring the vibration signals. Placing marking points on the equipment and analyzing their motion to infer deformation is an effective measurement approach: the method is simple to implement, places few requirements on the equipment, and has attracted increasing attention from engineers and technicians [1, 2]. To better measure equipment deformation caused by vibration in a laboratory environment, the project team introduced an improved Hough transform method for calibration and carried out the study on a vibration (shaking) table, using the marking points detected by the improved Hough transform as the reference. Through experiments, the designed method was successfully used to study the effects of vibration on various types of equipment, and the corresponding data and results demonstrate the effectiveness of the proposed process.
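
A hedged sketch of marker-based displacement measurement with the standard Hough circle transform (the authors' improved variant is not reproduced); frames/*.png is a hypothetical path to the captured high-speed sequence:

```python
import glob
import cv2
import numpy as np

centres = []
for path in sorted(glob.glob("frames/*.png")):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)                         # suppress sensor noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    if circles is not None:
        x, y, _ = circles[0, 0]                            # strongest marker candidate
        centres.append((x, y))

if len(centres) > 1:
    c = np.array(centres)
    disp = np.linalg.norm(c - c[0], axis=1)                # displacement vs. first frame [px]
    print("peak displacement:", disp.max(), "px over", len(c), "frames")
```

Converting the pixel displacement to physical units would require the camera calibration and marker geometry, which are not given in the abstract.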
