
List of accepted submissions

 
 
 
  • Open access
Texture Classification Based on Audio and Vibro-Tactile Data

The tactile perception of material properties is a difficult task, but also of great importance for the skillful manipulation of objects in fields such as robotics, virtual reality and augmented reality. Given the diversity of material properties, integrated tactile perception systems require efficient extraction and classification of features from signals collected by tactile sensors. This paper focuses on the development and validation of an automatic learning system for the classification of tactile data in the form of vibrotactile (accelerometer) and audio (microphone) data for texture recognition. The tests carried out have shown that, among the extracted features, the combination of key components obtains the best results. These include the standard deviation, the mean, the median absolute deviation, and the energy that characterizes the power of the signal, a measure which reflects the perceptual properties of the human system associated with each sensory modality. Moreover, the Fourier characteristics extracted from the vibro-tactile and audio signals contribute to the quality of the perception. In order to reduce the dimensionality of the tactile dataset and identify the most compact models, we apply principal component analysis and a feature selection process based on feature importance. Several machine learning models, including Naïve Bayes classification, the K-nearest neighbors algorithm, decision trees, random forests, support vector machines, logistic regression, neural networks, XGBoost (Extreme Gradient Boosting) and XGBRF (a combination of random forests as a framework and the XGBoost algorithm), are compared in an attempt to identify the best compromise between the number of features, the classification performance and the computation time. Moreover, we demonstrate that the choice of the sampling length from the tactile signals is an important aspect that can have a significant impact on classification accuracy.
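
As a rough illustration of the pipeline described above, the following sketch extracts the named statistical and Fourier features from signal windows, reduces them with principal component analysis, and cross-validates a few of the listed classifiers. It is not the authors' code: the synthetic signals, window length, number of classes and classifier settings are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): per-window statistical and spectral
# features from vibro-tactile/audio windows, PCA reduction, and a small
# classifier comparison. Data, window length and class count are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def window_features(x):
    """Features named in the abstract: mean, standard deviation, median
    absolute deviation, signal energy, plus a few low-frequency Fourier bins."""
    mad = np.median(np.abs(x - np.median(x)))
    energy = np.sum(x ** 2) / len(x)
    fft_mag = np.abs(np.fft.rfft(x))[:8]            # first spectral magnitudes
    return np.concatenate(([x.mean(), x.std(), mad, energy], fft_mag))

rng = np.random.default_rng(0)
n_windows, win_len, n_classes = 300, 1024, 5        # assumed sampling length and texture count
X = np.stack([window_features(rng.standard_normal(win_len)) for _ in range(n_windows)])
y = rng.integers(0, n_classes, n_windows)           # placeholder texture labels

X_red = PCA(n_components=6).fit_transform(X)        # dimensionality reduction

for name, clf in [("kNN", KNeighborsClassifier()),
                  ("RandomForest", RandomForestClassifier()),
                  ("SVM", SVC())]:
    score = cross_val_score(clf, X_red, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```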

  • Open access

Semi-Supervised Adaptation for Skeletal Data Based Human Action Recognition

Recent research on human action recognition is largely facilitated by skeletal data, a compact representation composed of key joints of the human body that is efficiently extracted from 3D imaging sensors and that offers the merit of being robust to variations in the environment. However, leveraging the capabilities of artificial intelligence on such sensory input requires the collection and annotation of a large volume of skeleton data, which is extremely time-consuming, troublesome and prone to subjectivity in practice. In this paper, a trade-off approach is proposed that utilizes the recent contrastive learning technique to surmount the high requirements imposed by traditional machine learning methods on labeled skeletal data while training a capable human action recognition model. Specifically, the approach is designed as a two-phase semi-supervised learning framework. In the first phase, an unsupervised model is trained in a contrastive learning fashion to extract high-level semantic representations of human action activity from unlabeled skeletal data. The resulting pre-trained model is then fine-tuned on a small number of properly labeled data in the second phase. The overall strategy helps identify rules for using the least amount of labeled data while achieving a human action recognition model with performance comparable to the state of the art. The popular graph convolutional neural networks are integrated into the proposed semi-supervised learning framework, and experimentation is conducted on the large-scale human action recognition dataset NTU-RGBD. The paper provides comprehensive comparisons between experimental results obtained with the semi-supervised learning model and with fully supervised learning models. Relative usage of labeled data is emphasized to demonstrate the potential of the proposed approach.
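
A minimal sketch of the two-phase recipe follows. It is not the paper's graph-convolutional pipeline: a toy MLP encoder stands in for the GCN, random tensors stand in for skeleton sequences, and the jitter augmentation, batch sizes and feature dimensions are illustrative assumptions.

```python
# Minimal sketch: contrastive pretraining on unlabeled data (phase 1) followed
# by fine-tuning on a small labeled subset (phase 2). Shapes and data are fake.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss over two augmented views of a batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)
    mask = torch.eye(z.size(0), dtype=torch.bool)
    sim = (z @ z.t() / tau).masked_fill(mask, float("-inf"))   # exclude self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

feat_dim, n_classes = 75, 60                        # e.g. 25 joints x 3 coords; NTU-RGBD has 60 classes
encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 64))
head = nn.Linear(64, n_classes)

# Phase 1: contrastive pretraining on unlabeled skeleton vectors.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(100):
    x = torch.randn(256, feat_dim)                  # unlabeled batch (stand-in)
    v1, v2 = x + 0.02 * torch.randn_like(x), x + 0.02 * torch.randn_like(x)
    loss = nt_xent(encoder(v1), encoder(v2))
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tune the pre-trained encoder plus a classification head on few labels.
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
x_lab = torch.randn(64, feat_dim)                   # small labeled subset (stand-in)
y_lab = torch.randint(0, n_classes, (64,))
for _ in range(50):
    loss = F.cross_entropy(head(encoder(x_lab)), y_lab)
    opt.zero_grad(); loss.backward(); opt.step()
```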

  • Open access
Enhancing Indoor Position Estimation Accuracy: Integration of IMU, Raw Distance Data, and Extended Kalman Filter with Comparison to Vicon Indoor Positioning System Data

In today's world, indoor positioning systems that facilitate people's navigation inside buildings have become a significant area of research and development. These systems offer effective solutions by combining different technologies to determine users' locations in indoor environments where GPS signals are not accessible. Indoor positioning systems are not only beneficial for easing navigation in places like shopping malls, airports, and hospitals but also draw attention for their potential to optimize evacuation processes during emergencies. The aim of this study is to examine various technologies and algorithms used in indoor positioning systems and evaluate their suitability for different application areas. The fundamental technologies used for indoor positioning include Bluetooth Low Energy (BLE), Wi-Fi, magnetic field detection, ultra-wideband (UWB), and sensor fusion methods. In this study, research has been conducted on raw distance data and different Kalman filters to achieve more accurate indoor position estimation. For the initial position estimate, a trilateration algorithm based on Recursive Least Squares (RLS) has been employed using distance data. Subsequently, the outputs of this trilateration algorithm have been fused with accelerometer and gyroscope data. During this fusion process, both Extended Kalman Filter (EKF) and Cubature Kalman Filter (CKF) algorithms have been utilized. The obtained results have facilitated a comparison between these two algorithms. The data used for algorithm testing were acquired from real sensors. Based on the test results, the two algorithms have been compared using Root Mean Square Error (RMSE) and processing time metrics.
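
As a rough illustration of the initial position-estimation step, the sketch below solves the trilateration problem with a batch least-squares stand-in for the RLS formulation; the 2D anchor layout and noise level are assumptions, and the subsequent EKF/CKF fusion with accelerometer and gyroscope data is omitted.

```python
# Minimal sketch: linearized least-squares trilateration from noisy ranges.
import numpy as np

def trilaterate(anchors, d):
    """Position estimate from anchor coordinates (N, 2) and measured ranges (N,)."""
    x0, y0 = anchors[0]
    A = 2 * (anchors[0] - anchors[1:])              # rows: [2(x0 - xi), 2(y0 - yi)]
    b = (d[1:] ** 2 - d[0] ** 2
         - np.sum(anchors[1:] ** 2, axis=1) + x0 ** 2 + y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)     # solve the overdetermined system
    return pos

anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
true_pos = np.array([2.0, 3.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.05, 4)
print(trilaterate(anchors, ranges))                 # approximately [2.0, 3.0]
```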

  • Open access
AI-Driven Estimation of Vessel Sailing Times and Underwater Acoustic Pressure for Optimizing Maritime Logistics

Today, a large number of ships and nautical elements are active at sea. According to UNCTAD, approximately 80% of world trade is transported by sea, and this number is expected to increase further in the coming years. In addition, shipping companies have been reporting over the years that disruptions and deviations from the initial plan occur frequently, resulting in delays. These delays contribute to poor port optimization, disruptions in the market chain, and increased pollution, mainly greenhouse gas emissions and underwater radiated noise, due to prolonged idle times of vessels awaiting port calls. In fact, in April 2018, the IMO adopted the Initial Strategy for the reduction of GHG emissions from shipping, which sets key ambitions, including cutting annual greenhouse gas emissions from international shipping by at least half by 2050, compared with their level in 2008. This strategy is in line with Zero-Emission Waterborne Transport, the Horizon Europe partnership that aims to deliver and demonstrate zero-emission solutions for all major ship types and services before 2030.

In this paper, we present an innovative approach that incorporates artificial intelligence (AI) models, specifically machine learning (ML), and preprocessing techniques to estimate the sailing time of vessels in port surroundings. All of this is accomplished by leveraging historical vessel data, such as ship characteristics, movement patterns, weather conditions, and port-specific factors (docks and areas of action). Preprocessing is crucial to achieving accurate AI models, enabling them to effectively learn from data-driven insights and accurately estimate vessel dwell times. In this way, we study the impact of preprocessing the data on prediction quality.

In addition, by applying an underwater acoustic propagation model to each ship along its route, aspects directly related to underwater noise pressure in the port context are studied. This study aligns with the Marine Strategy Framework Directive (MSFD), in particular Descriptor 11, seeking a balance between optimizing economic marine activities and maintaining good environmental status.

The data used to train the model cover one calendar year, from January 2022 to December 2022, in the Port of Cartagena area (Spain). The raw dataset consists of 32 columns and 1,259,616 rows. This dataset was divided into several CSV files (each covering a half-month period), and after concatenation into a single file, a descriptive analysis of the data was performed. From this analysis, interesting conclusions were drawn, such as the random routes taken by "tugboat" type ships, which only navigate when they need to assist another ship in entering the port, or the patterns followed by various types of ships in their routes, such as "support fishing vessel" types. It was also observed that, on average, most routes last around 2 hours, and the speed of ships in this study area averages around 9 knots (a relatively high speed). The general source levels of the ships in this study range from 110 to 120 dB re 1 µPa.
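
The concatenation and descriptive analysis described above can be reproduced with a few lines of pandas; the file pattern and column names below are assumptions, not the actual dataset schema.

```python
# Minimal sketch: merge the half-month CSV exports and summarize them.
import glob
import pandas as pd

files = sorted(glob.glob("ais_2022_*.csv"))         # hypothetical half-month files
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

print(df.shape)                                     # expected on the order of 1,259,616 rows x 32 columns
print(df.describe(include="all").T)                 # overall descriptive statistics
# Example grouped view, assuming hypothetical 'ship_type' and 'speed_knots' columns:
print(df.groupby("ship_type")["speed_knots"].mean())
```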

Various models, such as ANN (Artificial Neural Networks), Gradient Boosting, Random Forest, and linear models, were employed in this study. After several tests and cross-validation methods, the Gradient Boosting model was selected as the best among them, providing a first version of a model with an R2 of 0.82 and an MSE of 0.20.
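
A hedged sketch of this model-selection step is shown below: a Gradient Boosting regressor cross-validated with R2 and MSE scoring. The synthetic features and hyperparameters are placeholders, not the configuration that produced the reported scores.

```python
# Minimal sketch: cross-validated Gradient Boosting for sailing-time estimation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))                      # preprocessed vessel/route/weather features (stand-in)
y = X @ rng.normal(size=8) + rng.normal(scale=0.3, size=2000)   # placeholder sailing times

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
cv = cross_validate(model, X, y, cv=5, scoring=("r2", "neg_mean_squared_error"))
print("R2 :", cv["test_r2"].mean())
print("MSE:", -cv["test_neg_mean_squared_error"].mean())
```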

Our results demonstrate promising accuracy in estimating vessel navigation times in a specific zone, providing valuable information for port operators, shipping companies, and other stakeholders (including bunkering) to optimize port operations, streamline logistics processes, and reduce environmental impacts. Thus, this research represents a significant step towards harnessing the potential of AI, specifically ML, in improving maritime logistics and addressing the challenges of port optimization.

  • Open access
Low-cost environmental monitoring station to acquire health quality factors

In recent years, with the development of the IoT, emphasis has been placed on the construction of IoT devices in conjunction with an appropriate information system for informing citizens in various fields (transportation, trade, etc.). Focusing on health, there are specific IoT devices for monitoring a patient's health; alternatively, IoT devices can be used for public monitoring of an area's environmental data which affect health quality. In densely populated areas and especially in large cities, environmental pollution includes, apart from air pollution, the exposure of citizens to solar radiation (ultraviolet UVA and UVB radiation) as well as noise pollution in areas where people live and work. As is known, ultraviolet radiation, especially during the summer months, can cause the onset of skin cancer and various eye diseases, while noise pollution can create mental disorders in humans, especially in young people. In this work, a low-cost solar radiation and noise pollution monitoring station is presented. The parts that make up the station are a microcontroller (TTGO-OLED32) with an integrated LoRa device, an ultraviolet radiation sensor and sound sensors. In addition, a mini UPS device has been installed in case of power failure, and a GPS device provides the location point. The measurements are obtained from the sensors every ten minutes and are transmitted via the LoRa network to an application server in which the user has direct access to the environmental data of an area. In conclusion, the data obtained from such IoT devices support urban studies aimed at optimizing factors in people's lives.
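
The station's measurement loop can be pictured with the short sketch below. It is written in MicroPython-style Python, and read_uv(), read_noise(), read_gps() and lora_send() are hypothetical helpers standing in for the actual TTGO-OLED32 sensor and LoRa drivers.

```python
# Minimal sketch: sample the sensors every ten minutes and send the reading uplink.
import json
import time

SAMPLE_PERIOD_S = 600                               # measurements every ten minutes

def read_uv():    return 3.2                        # placeholder UV reading
def read_noise(): return 58.7                       # placeholder sound level, dB(A)
def read_gps():   return (38.05, 23.80)             # placeholder latitude/longitude

def lora_send(payload: bytes):                      # placeholder for the LoRa uplink
    print("uplink:", payload)

while True:
    lat, lon = read_gps()
    packet = {"uv": read_uv(), "noise": read_noise(), "lat": lat, "lon": lon}
    lora_send(json.dumps(packet).encode())          # forwarded to the application server
    time.sleep(SAMPLE_PERIOD_S)
```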

  • Open access
LoRa radius coverage map on urban and rural areas: Case study of Athens northern suburbs and Tinos Island, Greece

With the rapid development of the Internet of Things, the most widespread way of transmitting data from such arrangements is the utilization of a LoRa network. The advantages offered by LoRa concern both low power consumption and wireless coverage of an area. Although many works refer to the coverage radius of a geographical area, there are differences between urban (city) and rural (countryside) areas, as rural areas have neither dense structures nor radio signal noise inside the LoRa operating frequency spectrum. Results are thus expected to be better in rural areas than in urban areas. Especially in an urban area, apart from the signal noise caused by other LoRa devices (both commercial and private), the coverage varies according to the floor on which a LoRa station is placed in a building, in relation to the height at which the LoRa gateway is placed in another building. In this work, a LoRa radio coverage study is presented within a radius of 2 km, both in an urban and in a rural environment, using only one LoRa gateway. In particular, the urban environment includes buildings with an average height of 15 m, while the geographical surface of the area includes two hills. To better capture the coverage, LoRa stations are placed on all floors of the selected buildings periodically. The results show the difference in coverage between urban and rural areas, which is related to radio signal noise. Furthermore, we observe significant changes in the coverage map in urban areas, affected by the installation height (floor) of the LoRa station. By understanding these variations in LoRa network performance for different environments, we can make informed decisions when deploying such networks, optimizing their efficiency and ensuring seamless data transmission in both urban and rural settings.

  • Open access
Automated Damage Detection with Low-Cost X-Ray Radiography using Data-driven Predictor Models and Data Augmentation by Xray Simulation

It is still difficult to identify and detect hidden faults in multi-layered composites like fiber-metal laminates (FML), even using advanced Xray Computed Tomography (CT). For example, an impact damage is nearly invisible in a frontal Xray projection, although the deformation can be felt and detected manually by hand. Things get worse if a portable low-cost Xray radiography or semi-tomography machine is used (called a Low-Q measuring device), as introduced and described in this work. The Xray equipment consists of a low-cost Xray source for dental diagnostics and an Xray detector composed of a conventional medical Xray converter and amplifier foil (Fine 100), backside-imaged by a commercially available CMOS monochrome image sensor (back-illuminated Sony IMX290 2M pixel sensor) and simple two-lens optics. The optical distortion introduced by the optics increases with increasing distance from the center of the image ("barrel distortion") and must be corrected, at least for CT 3D volume reconstruction.

The measured Xray images exhibit increased, spatially uniform Gaussian noise (compared with high-quality flat panel detectors) and, more importantly, randomly located "popcorn" shot noise caused by avalanche effects in pixels and pixel clusters (islands) due to Xray radiation exposure (back-illuminated sensors are very sensitive to this noise). The Gaussian noise can be reduced by averaging, while the shot noise is removed by using multiple images recorded in series and an automated pixel replacement algorithm. The shot noise is a seeded threshold phenomenon, i.e., the location and number of white pixels change from image to image, which allows white pixels in one image to be replaced with unaltered pixels from another image.
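
The two noise-removal steps can be sketched with a few lines of numpy: averaging a series of exposures suppresses the Gaussian noise, and saturated "popcorn" pixels in one frame are replaced with unaltered pixels from another frame, exploiting the fact that their locations change between images. The saturation threshold and the synthetic frames are assumptions, not the authors' exact algorithm.

```python
# Minimal sketch: multi-frame popcorn-pixel replacement followed by averaging.
import numpy as np

def clean_series(frames, shot_threshold=0.95):
    """frames: (N, H, W) stack of normalized exposures of the same scene."""
    frames = frames.astype(np.float64).copy()
    for i in range(len(frames)):
        donor = frames[(i + 1) % len(frames)]       # another exposure of the same scene
        hot = frames[i] >= shot_threshold           # candidate popcorn (white) pixels
        frames[i][hot] = donor[hot]                 # replace them from the donor frame
    return frames.mean(axis=0)                      # averaging reduces the Gaussian noise

rng = np.random.default_rng(2)
scene = rng.uniform(0.2, 0.6, (64, 64))             # synthetic attenuation image
stack = scene + rng.normal(0, 0.02, (8, 64, 64))    # Gaussian read-out noise
for f in stack:                                     # randomly located popcorn pixels per frame
    idx = rng.integers(0, 64, (30, 2))
    f[idx[:, 0], idx[:, 1]] = 1.0
print(np.abs(clean_series(stack) - scene).mean())   # small residual error
```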

After the image preprocessing and filtering stages, damage and material faults are identified by a pixel anomaly detector, basically an advanced Convolutional Neural Network (CNN), and by region-proposal R-CNN models. Training of pixel classifiers can use only a few images because each pixel region is a sample instance. R-CNN models, however, require an extended sample database, which cannot be acquired by physical measurements alone. The training and test data sets will always be limited by the small number of specimens, e.g., with impact damages, and by the limited variance in material and damage parameters (e.g., location). For this reason, the data set is extended by synthetic data augmentation using Xray simulation. In contrast to other wave measurement principles like Guided Ultrasonic Waves (GUW), Xray images can be simulated with high accuracy (compared with physically measured images). We use the gVirtualXray software library [GVX23; VID21], which performs Xray image simulation using only GPU processing and ray tracing. gVirtualXray has been shown to produce accurate images as long as diffraction and reflection of Xrays are neglected. In addition, a novel few-projection hidden-damage detection methodology is introduced that can be used in-field with a portable Xray machine as described above. Preliminary results show a high damage detection rate for a wide range of materials.

## References

[GVX23] gVirtualXray, https://gvirtualxray.fpvidal.net, accessed online on 24 January 2023.
[VID21] F. P. Vidal, Introduction to X-ray simulation on GPU using gVirtualXRay, in Workshop on Image-Based Simulation for Industry 2021 (IBSim-4i 2020), London, UK, October 2021.

  • Open access
Time Series Modeling and Predictive Analytics for Sustainable Environmental Management: A Case Study of the Mar Menor (Spain)

In the realm of data science and machine learning, time series analysis plays a crucial role in examining and predicting data that evolves over time. These sequential observations, recorded at regular intervals, hold significant value in comprehending various phenomena, including environmental dynamics. The Mar Menor, situated in the Region of Murcia, presents a particularly urgent case due to its unique ecosystem and the challenges it confronts. This paper addresses the imperative need to investigate the environmental parameters of the Mar Menor and develop accurate forecasting models for informed decision-making and environmental management. These parameters, encompassing water quality, temperature, salinity, nutrient levels, chlorophyll, and more, exhibit intricate temporal patterns influenced by a multitude of factors, including human activities, climate change, and natural processes. By leveraging advanced machine learning techniques, we can reveal valuable insights into their behavior and project future trends, empowering stakeholders to implement effective strategies for conservation and sustainable development.

The approach undertaken in this study encompasses both descriptive and predictive analyses, aiming, on the one hand, to identify a methodology for time series analysis that suits each dataset based on its specific characteristics and, on the other hand, at a more specific level, to discover the most suitable predictive model for time series forecasting based on the unique characteristics of the Mar Menor dataset. This includes identifying potential trends, seasonality, and temporal dependencies that contribute to the complexity of the environmental parameters. To find the most appropriate predictive model, a series classification is performed using robust time series analysis methods such as correlation analysis or the Dickey-Fuller test, along with evaluation and comparison techniques like the Akaike (AIC) and Bayesian (BIC) information criteria, which allow finding the model that best fits the series' characteristics.
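
The series-classification step can be illustrated with statsmodels: an augmented Dickey-Fuller stationarity check followed by an AIC/BIC comparison of candidate ARIMA orders. The synthetic series and the candidate orders are placeholders for the actual Mar Menor parameters.

```python
# Minimal sketch: stationarity test and information-criterion model comparison.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
series = pd.Series(np.cumsum(rng.normal(size=300)) + 20.0)   # trending placeholder series

adf_stat, p_value, *_ = adfuller(series)
print(f"ADF statistic = {adf_stat:.2f}, p-value = {p_value:.3f}")   # p > 0.05 suggests differencing

for order in [(1, 1, 0), (2, 1, 1)]:                # candidate (p, d, q) orders
    fit = ARIMA(series, order=order).fit()
    print(order, "AIC:", round(fit.aic, 1), "BIC:", round(fit.bic, 1))
```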

Several state-of-the-art machine learning algorithms and statistical models, such as autoregressive models (AR), moving average models (MA), or the Facebook Prophet model, as well as recurrent neural networks (RNN), such as long short-term memory (LSTM), are thoroughly investigated to assess their efficiency in capturing the intricate dynamics of the Mar Menor's environmental parameters. The metrics RMSE, MAE, and MAPE will determine how well these models fit the Mar Menor series.
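
For the forecasting comparison itself, a hedged sketch is given below: a train/test split, a SARIMA fit, and the RMSE/MAE/MAPE metrics named above. The seasonal order and the synthetic series are assumptions rather than the configuration used in the study.

```python
# Minimal sketch: fit a SARIMA model and score its out-of-sample forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             mean_absolute_percentage_error)

rng = np.random.default_rng(4)
t = np.arange(365)
series = pd.Series(20 + 5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, 365))

train, test = series[:330], series[330:]
model = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 0, 0, 7)).fit(disp=False)
forecast = model.forecast(steps=len(test))

print("RMSE:", np.sqrt(mean_squared_error(test, forecast)))
print("MAE :", mean_absolute_error(test, forecast))
print("MAPE:", mean_absolute_percentage_error(test, forecast))
```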

The ecosystem of this fragile lagoon is susceptible to pollution, overfishing, and the impacts of climate change, necessitating a comprehensive understanding of the underlying processes to ensure its preservation. By developing accurate and robust forecasting models, this study aims to facilitate the identification of critical periods, evaluate potential risks, and formulate proactive mitigation strategies that can aid in maintaining the health and resilience of the Mar Menor's ecosystem.

The results demonstrate that both statistical models and artificial intelligence models are entirely valid for handling the time series of the Mar Menor. However, there is high variation in the accuracy of these models depending on the model configuration. Among the statistical models, SARIMA was the best model for most datasets. For example, for the temperature, chlorophyll, and oxygen datasets, this model achieves an RMSE of 0.33, 0.63, and 0.025, respectively. On the other hand, among the linear models, the support vector machine with a linear configuration is the best for two parameters, with a temperature RMSE of 0.37 and a chlorophyll RMSE of 0.82.

  • Open access
AI-Driven Blade Alignment for Aerial Vehicles' Rotary Systems using A* Algorithm and Statistical Heuristic

In the aviation domain, precise alignment of helicopter blades is paramount for ensuring optimal performance and safety during flight operations. Manual methods for blade alignment often demand extensive calculations and experienced technicians, resulting in time-consuming processes. This research proposes an innovative AI-based algorithm, integrating the A* algorithm and a statistical heuristic function, to optimise blade alignment in helicopter rotary systems. The algorithm seeks to minimise the standard deviation of blade distances from the ground, captured using high-speed distance sensors. First, the initial blade positions, along with the swashplate turn limitations, are given to the algorithm. Then, by exploring all potential adjustments and selecting the most promising sequence to minimise the standard deviation of blade distances (considering the allowable pitch limits), the algorithm achieves precise blade alignment, enhancing helicopter performance and safety. Finally, the algorithm outputs the recommended sequence of adjustments to be made to the swashplate.
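
A minimal sketch of the search idea follows: an A*-style search over discrete control-rod adjustments whose heuristic is the standard deviation of the current blade distances. The step size, turn limit, tolerance and initial distances are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch: A*-style search for a sequence of adjustments that levels the blades.
import heapq
import numpy as np

STEP_MM = 0.5            # assumed blade-height change per quarter-turn of a rod nut
MAX_TURNS = 6            # assumed per-rod adjustment limit
TOLERANCE = 0.3          # stop once the spread of blade distances is small enough (mm)

def astar_alignment(initial):
    start = tuple(initial)
    frontier = [(float(np.std(start)), 0.0, start, [])]   # (f = g + h, g, state, path)
    seen = {start}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if np.std(state) <= TOLERANCE:
            return path                                    # sequence of (blade, quarter-turns)
        for blade in range(len(state)):
            for turns in (-1, 1):                          # one quarter-turn down or up
                if abs(sum(t for b, t in path if b == blade) + turns) > MAX_TURNS:
                    continue                               # respect the allowable adjustment limits
                nxt = list(state)
                nxt[blade] += turns * STEP_MM
                nxt = tuple(nxt)
                if nxt in seen:
                    continue
                seen.add(nxt)
                heapq.heappush(frontier, (g + 1 + float(np.std(nxt)), g + 1,
                                          nxt, path + [(blade, turns)]))
    return None

# Example: one blade sits noticeably lower than the other three.
print(astar_alignment([120.0, 120.5, 118.0, 120.2]))
```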

To validate the algorithm's efficacy, we conducted comprehensive case studies using Mi-17 helicopters as a testbed. The algorithm was assessed under varying scenarios, such as near-perfect alignment, single-blade misalignment in upward and downward directions, and multiple blades in asymmetric positions. The results demonstrate the algorithm's capability to swiftly recommend the precise sequence of adjustments for each control rod nut, effectively minimising blade misalignment and reducing the standard deviation. The implications of this research are far-reaching, promising enhanced helicopter performance and safety across diverse application domains. By automating and streamlining the blade alignment process, the algorithm mitigates the reliance on human expertise and manual calculations, ensuring consistent and accurate blade alignment in real-world scenarios.

  • Open access
Design and development of a low-cost and compact real-time monitoring tool for battery life calculation

Lithium-ion batteries are utilized everywhere, from electronic equipment, smartphones and laptops to electric vehicles and power plants, due to their high energy density, low weight and low self-discharge. However, they have certain disadvantages, including high cost, a narrow operating temperature range and the need for a prime management system. Increased temperature caused by high supplied current can damage the cells, causing lithium deposition in the cathode or even vaporization of the electrolyte, leading to internal short circuits. In this paper, a compact and low-cost battery management system is presented. This system can measure the voltage and current of both the battery and the supply via external sensors, under both charging and discharging conditions. Two NTC thermistors, of 10 kOhm and 100 kOhm, are used for collecting the battery temperature at two different spots of the socket for direct comparison and validation of the layout accuracy, while an additional sensor measures external temperature and humidity. A charging socket is provided for charging the cell through an external source or a small PV panel with dynamic voltage output to test the battery response. Finally, an Arduino-compatible device is implemented, on one hand, to protect the battery from overcharging, collect these values every 10 seconds and calculate key parameters of the battery such as state of charge, state of health and state of life, and on the other hand, to send the battery data over Wi-Fi to an internet application server for real-time monitoring, in an efficient, portable and low-cost setup.
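
Two of the calculations mentioned above can be sketched briefly: converting an NTC thermistor divider reading to temperature with the Beta equation, and updating the state of charge by coulomb counting over the 10-second sampling period. The divider configuration, Beta value and cell capacity are assumptions for illustration.

```python
# Minimal sketch: thermistor temperature and coulomb-counting state of charge.
import math

def ntc_temperature_c(adc, adc_max=1023, r_fixed=10_000.0, r0=10_000.0,
                      beta=3950.0, t0_k=298.15):
    """Temperature from an NTC on the low side of a divider with a fixed resistor."""
    r_ntc = r_fixed * adc / (adc_max - adc)          # thermistor resistance from the ADC reading
    inv_t = 1.0 / t0_k + math.log(r_ntc / r0) / beta # Beta-equation model of the NTC
    return 1.0 / inv_t - 273.15

def update_soc(soc, current_a, capacity_ah, dt_s=10.0):
    """Coulomb counting: integrate the measured current over one 10 s sample period."""
    return soc - current_a * dt_s / (capacity_ah * 3600.0)

print(round(ntc_temperature_c(512), 1))              # ~25 C at a mid-scale reading
print(round(update_soc(0.80, 1.5, 2.5), 4))          # SoC after 10 s at 1.5 A discharge
```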
