
List of accepted submissions

  • Open access
Real-Time Surface Roughness Analysis in Milling Using Acoustic Emission Signals for Industry 4.0 Applications

In the expansion of Industry 4.0, many automation processes are being enhanced to accomplish higher productivity goals. With these new objectives within reach, faster and more reliable resource-processing methods are also in demand. Machining processes have likewise been improved by the development of IoT devices, which streamline operations, enable predictive maintenance, and provide real-time data for better decision-making, supporting these productivity levels. In metal milling, for instance, IoT-based sensor techniques have been developed and proven efficient at increasing speed and reliability while reducing system invasiveness and complexity, which improves profitability. The present paper proposes a real-time metal roughness average (Ra) analysis method based on Acoustic Emission (AE), which indirectly estimates roughness through signal processing and feature extraction of the AE signal via Power Spectral Density (PSD) evaluation. The experimental setting consists of a steel workpiece in which straight lines were milled with four distinct roughness levels (6 μm, 12 μm, 18 μm and 24 μm, produced by defined milling parameters), and the method was able to estimate Ra with an error under 7%. This work aims to contribute to the real-time monitoring of surface roughness in alignment with Industry 4.0 requirements by demonstrating the effectiveness of IoT-based solutions and the potential of Acoustic Emission in machinery sensing and process automation.
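The PSD feature-extraction step this abstract describes can be sketched minimally with Welch's method. The window length, band edges, and the band-power feature set below are illustrative assumptions; the paper's actual features and its mapping from features to Ra are not specified here.

```python
import numpy as np
from scipy.signal import welch

def psd_features(signal, fs, bands):
    """Estimate the PSD via Welch's method, then integrate power over
    frequency bands. Integrated band powers are one plausible AE
    feature set; band edges here are illustrative."""
    f, pxx = welch(signal, fs=fs, nperseg=1024)
    feats = []
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        feats.append(np.trapz(pxx[mask], f[mask]))  # band power
    return np.array(feats)
```

A regression model fitted on such feature vectors against measured Ra values would then provide the indirect roughness estimate.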

  • Open access
Autonomous Traffic Monitoring: Pedestrian Crossing Detection with Motion Sensors and a Maintenance Decision System in Smart Cities Using YOLOv8

The rapid expansion of urban infrastructure and the complexity of analyzing sudden vehicle movements in traffic systems necessitate autonomous traffic monitoring solutions for intelligent, autonomous traffic management. However, traditional methods such as manual monitoring and static rule-based detection often fail to meet the real-time requirements of a modern city, resulting in inefficient congestion management, pedestrian safety issues, and inadequate road maintenance. Conventional approaches are highly dependent on human intervention and predefined algorithms and consequently cannot adapt to dynamic traffic and the unpredictable movement of pedestrians. With urban populations on the rise, there is an urgent need for Artificial Intelligence-driven solutions that can effectively process large volumes of real-time data to ease traffic management and decision-making. This study presents an AI-based traffic monitoring framework that integrates deep learning and natural language processing (NLP) models for improved traffic safety, anomaly detection, and infrastructure optimization. The system comprises high-accuracy object detection with YOLOv8, adaptive pedestrian crossing recognition with Few-Shot Learning (FSL), and contextual analysis and real-time decision-making with LLaMA 3.2B. Using these technologies, together with the publicly available BDD100K dataset, the system achieves a high detection accuracy of 96%, a low inference time of 75.1 ms, and improved adaptability over the 89% of state-of-the-art (SOTA) methods. The results indicate the suitability of AI-driven methods for thoughtful city planning and autonomous mobility, with the potential for AI-driven frameworks to improve urban traffic management by increasing its efficiency and safety.
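Detectors such as YOLOv8 finish with non-maximum suppression (NMS) over candidate boxes. A minimal NumPy sketch of that standard post-processing step follows; the box format and the 0.5 IoU threshold are illustrative, not the framework's internals.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        # retain only candidates that overlap the kept box below thr
        order = order[1:][[iou(boxes[i], boxes[j]) < thr for j in order[1:]]]
    return keep
```

Two boxes covering the same pedestrian are thus collapsed into the single most confident detection before any downstream decision logic runs.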

  • Open access
Hand Gesture to Sound: A Real-Time DSP-Based Audio Modulation System for Assistive Interaction

This paper presents the design, development, and evaluation of an embedded hardware and digital signal processing (DSP) based real-time gesture-controlled system. The system architecture uses an MPU6050 inertial measurement unit (IMU), an Arduino Uno microcontroller, and a Python-based audio interface to recognize and classify directional hand gestures and transform them into auditory commands. Wrist tilts, i.e., left, right, forward, and backward, are recognized using a hybrid algorithm that combines thresholding, moving-average filtering, and low-pass smoothing to remove sensor noise and transient errors. The hardware setup uses I2C-based sensor acquisition, onboard preprocessing on the Arduino, and serial communication with a host computer running a Python script that triggers audio playback using the playsound library. Four gestures are programmed for basic needs: Hydration Request, Meal Support, Restroom Support, and Emergency Alarm. Experimental evaluation, conducted over more than 50 iterations per gesture in a controlled laboratory setup, resulted in a mean recognition rate of 92%, with a system latency of 120 to 150 milliseconds. The approach has low calibration cost, is inexpensive, and offers low-latency performance comparable to more advanced camera-based or machine learning-based methods, making it suitable for portable assistive devices.
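The thresholding-plus-smoothing classification described above can be sketched as follows. The axis orientation, the 0.35 g threshold, and the window length are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def moving_average(x, k=5):
    """Simple moving-average smoothing over a window of k samples."""
    return np.convolve(x, np.ones(k) / k, mode="valid")

def classify_tilt(ax, ay, thr=0.35):
    """Map smoothed accelerometer tilt (in g) to one of four gestures.
    Axis mapping and threshold are illustrative."""
    ax_s = moving_average(ax)[-1]  # latest smoothed x-axis value
    ay_s = moving_average(ay)[-1]  # latest smoothed y-axis value
    if ax_s > thr:
        return "right"
    if ax_s < -thr:
        return "left"
    if ay_s > thr:
        return "forward"
    if ay_s < -thr:
        return "backward"
    return "neutral"
```

On the host side, each non-neutral label would then trigger the corresponding audio file for the mapped request.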

  • Open access
IoT-Enabled Sensor Glove for Communication and Health Monitoring in Paralysed Patients

Due to their limited mobility and vocal limitations, paralysed individuals frequently struggle with communication and health monitoring. This work introduces an Internet of Things (IoT)-based system that combines continuous health monitoring with a sensor-based smart glove to enhance patient care. The glove detects falls, sends emergency messages via hand gestures, and monitors vital indicators, including SpO2, heart rate, and body temperature. The smart glove uses Arduino and ESP8266 modules with MPU6050, MAX30100, LM35, and flex sensors for these functions. The MPU6050 detects falls precisely, while the MAX30100 and flex sensors measure gestures, SpO2, heart rate, and body temperature. The flex sensor interprets hand motions as emergency alerts sent via Wi-Fi to a cloud platform for remote monitoring. The experimental results confirmed the superiority of the suggested module and validated its efficacy. Scalability, data logging, and real-time access are guaranteed by IoT integration. The actual test cases were predicted using a Support Vector Machine, achieving an average accuracy of 81.98%. The suggested module is affordable, non-invasive, easy to use, and appropriate for clinical and residential use. The system meets the essential needs of disabled people, enhancing both their quality of life and carer connectivity. Advanced machine learning for dynamic gesture detection and telemedicine integration are potential future improvements.
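The Support Vector Machine step can be illustrated with a toy two-feature classifier. The feature vector layout and class labels below are stand-ins for illustration; the glove's real feature set and training data are not given in the abstract.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for the glove's feature vectors:
# [flex_reading, accel_magnitude] -> gesture class (labels illustrative)
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])  # 0 = "rest", 1 = "emergency gesture"

# RBF-kernel SVM, the standard SVC configuration in scikit-learn
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
pred = clf.predict([[0.15, 0.15], [0.85, 0.85]])
```

In deployment, each incoming sensor sample would be turned into such a feature vector and classified before an alert is pushed to the cloud platform.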

  • Open access
Smart Cattle Behavior Sensing with Embedded Vision and TinyML at the Edge

Accurate real-time monitoring of cattle behavior is essential for enabling data-driven
decision-making in precision livestock farming. However, existing monitoring solutions
often rely on cloud-based processing or high-power hardware, which are impractical
for deployment in remote or low-infrastructure agricultural environments. There is a
critical need for low-cost, energy-efficient, and autonomous sensing systems capable of
operating independently at the edge. This paper presents a compact, sensor-integrated
system for real-time cattle behavior monitoring using an embedded vision sensor and
a TinyML-based inference pipeline. The system is designed for low-power deployment
in field conditions and integrates the OV2640 image sensor with the Sipeed Maixduino
platform, which features the Kendryte K210 RISC-V processor and an on-chip neural
network accelerator (KPU). The platform supports fully on-device classification of cattle
postures using a quantized convolutional neural network trained on the publicly available
cattle behavior dataset, covering standing and lying behavioral states. Sensor data is
captured via the onboard camera and preprocessed in real time to meet model input
specifications. The trained model is quantized and converted into a K210-compatible
.kmodel using the NNCase toolchain, and deployed using MaixPy firmware. System
performance was evaluated based on inference latency, classification accuracy, memory
usage, and energy efficiency. Results demonstrate that the proposed TinyML-enabled
system can accurately classify cattle behaviors in real time while operating within the
constraints of a low-power, embedded platform, making it a viable solution for smart
livestock monitoring in remote or under-resourced environments.
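The quantization step central to this TinyML pipeline can be illustrated with a minimal symmetric int8 scheme, a common post-training approach; the exact scheme applied by the NNCase toolchain for the K210 may differ.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: scale floats so the
    largest magnitude maps to 127, then round to integers."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale
```

Storing int8 weights instead of float32 cuts model size roughly fourfold, which is what makes on-chip inference on the KPU's limited memory feasible.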

  • Open access
AI/ML-Enabled Internet of Medical Things (IoMT) for Personalized Cardiac Health Monitoring and Predictive Diagnostics

Cardiovascular diseases (CVDs) are a major cause of global mortality, underscoring the need for intelligent and accessible cardiac health monitoring. This paper proposes a non-wearable Internet-of-Medical-Things (IoMT) system combining real-time sensing, edge processing, and AI-driven diagnostics. Stationary MAX30102 (heart rate, SpO2) and AD8232 (ECG) sensors interface with an ESP8266 microcontroller, which processes data locally and feeds machine learning models trained on the UCI Cleveland dataset. Random Forest and XGBoost achieved over 80% accuracy in predicting early cardiac risk. A Flask-SQLite web application provides role-based doctor/patient access, and a Natural Language Processing (NLP) based interactive chatbot offers personalized guidance. The system delivers scalable, real-time, edge-enabled cardiac diagnostics without relying on wearable devices.
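The Random Forest stage can be sketched as below. The synthetic data merely stands in for the Cleveland dataset's 13 tabular features; the split ratio and hyperparameters are illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Cleveland dataset's 13 clinical features
X, y = make_classification(n_samples=300, n_features=13,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out accuracy
```

An XGBoost model would slot into the same fit/score interface, which is why the two are easy to compare on identical splits.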

  • Open access
A Three-Stage Transformer-Based Approach for Food Mass Estimation

Accurate food mass estimation is a key component of automated calorie estimation tools, and there is growing interest in leveraging image analysis for this purpose due to its ease of use and scalability. However, current methods face important limitations. Some rely on 3D sensors for depth estimation, which are not widely accessible to all users, while others depend on camera intrinsic parameters to estimate volume, reducing their adaptability across different devices. Furthermore, AI-based approaches that bypass these parameters often struggle with generalizability when applied to images captured using diverse sensors or camera settings. To overcome these challenges, we introduce a three-stage, transformer-based method for estimating food mass from RGB images, balancing accuracy, computational efficiency, and scalability. The first stage applies the Segment Anything Model (SAM 2) to segment food items in images from the SUECFood dataset. Next, we use the Global-Local Path Network (GLPN) to perform monocular depth estimation (MDE) on the Nutrition5k dataset, inferring depth information from a single image. These outputs are then combined through alpha compositing to generate enhanced composite images with precise object boundaries. Finally, a Vision Transformer (ViT) model processes the composite images to estimate food mass by extracting relevant visual and spatial features. Our method achieves notable improvements in accuracy compared to previous approaches, with a mean squared error (MSE) of 5.61 and a mean absolute error (MAE) of 1.07. Notably, this pipeline does not require specialized hardware such as depth sensors or multi-view imaging, making it well suited for practical deployment. Future work will explore the integration of ingredient recognition to support a more comprehensive dietary assessment system.
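The alpha-compositing step that fuses the segmentation and depth outputs follows the standard blend out = α·fg + (1 − α)·bg. A minimal sketch, in which treating the segmentation mask as the alpha channel over the depth map is an assumption about how the stages connect:

```python
import numpy as np

def alpha_composite(fg, bg, alpha):
    """Standard alpha blend: out = alpha * fg + (1 - alpha) * bg.
    Here fg could be a depth map, bg the RGB frame, and alpha the
    per-pixel segmentation mask (values in [0, 1])."""
    alpha = alpha[..., None]  # broadcast mask over the channel axis
    return alpha * fg + (1.0 - alpha) * bg
```

The composite keeps depth information only inside segmented food regions, giving the ViT sharp object boundaries to attend to.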

  • Open access
Evaluating Voice Biomarkers and Deep Learning for Neurodevelopmental Disorder Screening in Real-World Conditions

Voice acoustics have been extensively investigated as potential non-invasive markers for Autism Spectrum Disorder (ASD). Although many studies report high accuracies, they typically rely on highly controlled clinical protocols that reduce linguistic variability. Their data are also recorded using specialized microphone arrays that ensure high-quality recordings. Such dependencies limit their applicability in real-world or in-home screening contexts. In this work, we explore an alternative approach designed to reflect the requirements of mobile-based applications that could assist parents in monitoring their children. We use an open-access dataset of naturalistic storytelling, extracting only the speech segments in which the child is speaking. We applied previously published ASD voice-analysis pipelines to this dataset, which yielded suboptimal performance under these less controlled conditions. We then introduce a deep learning-based method that learns discriminative representations directly from raw audio, eliminating the need for manual feature extraction while being more robust to environmental noise. This approach achieves an accuracy of up to 77% in classifying children with ASD, children with Attention Deficit Hyperactivity Disorder (ADHD), and neurotypical children. Frequency-band occlusion sensitivity analysis on the deep model revealed that ASD speech relied more heavily on the 2000–4000 Hz range, typically developing (TD) speech on both low (100–300 Hz) and high (4000–8000 Hz) bands, and ADHD speech on mid-frequency regions. These spectral patterns may help bring us closer to developing practical, accessible pre-screening tools for parents.
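The band-occlusion part of the sensitivity analysis can be sketched as FFT masking: zero a frequency band, resynthesize the audio, and measure how much the model's accuracy drops. The FFT-masking implementation below is an illustrative assumption; the paper's exact occlusion procedure is not specified here.

```python
import numpy as np

def occlude_band(signal, fs, lo, hi):
    """Zero out the [lo, hi] Hz band of a signal via FFT masking.
    Re-scoring the model on occluded audio and measuring the accuracy
    drop estimates that band's importance."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs >= lo) & (freqs <= hi)] = 0.0
    return np.fft.irfft(spec, n=len(signal))
```

Sweeping this occlusion across bands such as 100–300 Hz, 2000–4000 Hz, and 4000–8000 Hz yields a per-band importance profile like the one reported.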

  • Open access
Enhancing Fire Alarm Systems Using Edge Machine Learning for Smoke Classification and False Alarm Reduction

Traditional fire alarm systems use smoke sensors to monitor the concentration of smoke particles in the air. If the concentration exceeds a certain threshold, an alarm signal is triggered. However, this detection process can lead to false fire alarms, causing unnecessary evacuations and panic among residents. False alarms may result from activities such as smoking in non-smoking areas, burning Oud, or cooking smoke. In this study, a deep neural network (DNN) model was trained to classify three types of smoke: Oud, cigarette, and burning tissue. The offline prediction accuracy of this model was 97.5%. The size of the model after converting it to TensorFlow Lite was 4.7 KB. It can also be converted to a tiny model for deployment on a microcontroller.

  • Open access
Smart IoT-Based COVID-19 Vaccine Supply Chain, Monitoring, and Control System

This research paper presents a smart IoT-based COVID-19 vaccine supply chain, monitoring, and control system. The proposed system is designed to efficiently and effectively monitor COVID-19 vaccine storage sites by tracking their temperature, humidity, quantity, and location on a map across various supply chain categories. It ultimately aims to monitor and control temperatures outside the acceptable range at the tracked location. The approach utilizes temperature, humidity, and ultrasonic sensors, a GPS module, a Wi-Fi module, and an Arduino Uno microcontroller. The system was designed and implemented using the Arduino and Proteus integrated design environments (IDEs) and coded in the embedded C/C++ programming language. A real-life working system prototype was designed and implemented. The measured sensor readings can be viewed via a computer system or any mobile device, such as an Android phone, iPhone, iPad, or laptop, with the aid of a cloud-based platform, namely Thingspeak.com. The experimentally measured sensor readings are stored in a data log file for subsequent download and analysis whenever the need arises. The data aggregation and analytics are coded using MATLAB and viewed as charts, and the location map of vaccine carrier coordinates is sent to the web cloud for tracking. An alarm message is sent to the monitoring and control system if an unfavorable vaccine environment exists in either the store or the carrier container. A suitable sensor-based interface architecture and web portal are provided, allowing health practitioners to remotely monitor the vaccine supply chain system. This method supports health workers by reducing the high levels of supervision required of vaccine supervisors to ensure the smooth supply of vaccines to vaccine collection centers, using a wireless sensor network and IoT technology.
Experimental results from the implemented system prototype demonstrated the benefits of the proposed approach and its possible real-life health monitoring applications.
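Publishing readings to ThingSpeak goes through its channel-update REST endpoint, a request to `/update` carrying the channel write key and `fieldN` values. A minimal sketch of building that request; the field-number-to-sensor mapping is an illustrative assumption and must match the channel's configuration.

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key, temperature, humidity):
    """Build a ThingSpeak channel-update request URL. Field numbers
    (field1 = temperature, field2 = humidity) are illustrative."""
    query = urlencode({"api_key": api_key,
                       "field1": temperature,
                       "field2": humidity})
    return f"{THINGSPEAK_UPDATE}?{query}"
```

On the Arduino side the equivalent request would be issued over the Wi-Fi module; the cloud dashboard then charts each field, and the logged values can be exported for the MATLAB analysis described above.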
