
List of accepted submissions

 
 
 
  • Open access
A Preliminary Study on Arterial Stiffness Assessment Using Photoplethysmographic Sensors

In recent years, statistical studies have highlighted an increase in the incidence of cardiovascular diseases (CVDs). Detecting and diagnosing these conditions early is therefore crucial to ensure appropriate treatment and prevent further complications. Since the elastic properties of arteries change with aging or in the presence of disease, arterial stiffness is a key indicator of vascular health and a significant risk factor for CVDs. A commonly used physical model is based on the Moens-Korteweg equation, which relates the Pulse Wave Velocity (PWV), the speed at which the pressure wave propagates along a blood vessel, to Young's modulus. The PWV, in turn, can be calculated from the Pulse Transit Time (PTT), the temporal delay required for a pressure wave to travel a specific distance between two sites. To study the relationship between PWV and arterial stiffness, an experimental in vitro system was created to simulate the cardiovascular apparatus under controlled velocity and pressure conditions. Four silicone models with different mechanical properties were used, simulating blood vessels in terms of geometry and mechanical characteristics. Two photoplethysmographic (PPG) sensors were used for PTT measurements. These are extremely small, low-cost devices that use an LED light source and a photodetector to detect volumetric changes. Such sensors are currently widely used in wearable devices such as smartwatches and smartbands to provide the user with important vital parameters, such as heart rate and blood oxygenation. The pair of PPG sensors was positioned in each phantom model at three specific distances to determine the optimal distance for detecting arterial stiffness. The purpose of this study is to enhance the use of PPG sensors for monitoring the mechanical properties of blood vessels and, thus, to help prevent potential cardiovascular pathologies.
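
The PWV-to-stiffness chain described above can be sketched numerically. A minimal example, assuming illustrative values for sensor spacing, transit time, vessel geometry, and fluid density (none of these numbers come from the study):

```python
import math

def pulse_wave_velocity(distance_m, ptt_s):
    """PWV is the distance between the two PPG sites divided by the PTT."""
    return distance_m / ptt_s

def youngs_modulus_moens_korteweg(pwv, wall_thickness_m, diameter_m, density=1060.0):
    """Invert the Moens-Korteweg relation PWV = sqrt(E*h / (rho*D))
    to estimate Young's modulus E from a measured PWV.
    density: fluid density in kg/m^3 (approximately that of blood)."""
    return pwv ** 2 * density * diameter_m / wall_thickness_m

# Hypothetical measurement: sensors 0.10 m apart, PTT of 20 ms
pwv = pulse_wave_velocity(0.10, 0.020)                # 5.0 m/s
E = youngs_modulus_moens_korteweg(pwv, 0.001, 0.008)  # 212 kPa for this toy vessel
print(pwv, E)
```

A stiffer phantom (larger E) yields a higher PWV and hence a shorter PTT over the same sensor spacing, which is the effect the three inter-sensor distances are meant to resolve.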

  • Open access
Wearable Sensor-Based Gait Analysis and Robotic Exoskeleton Control for Parkinson’s Patients

Gait disorders are significant indicators of neurological diseases such as Parkinson’s disease and reduce the quality of life of patients. Although the detection and classification of gait disorders are essential for treatment and diagnosis, there is currently no single standardized gait analysis system. Wearable sensors offer a promising solution, providing accessible gait analysis by capturing periodic movements during walking. In addition to analysis, soft body robotic exoskeletons improve walking by applying controlled robotic forces to correct abnormal gait patterns. However, for optimal therapeutic effects, exoskeletons must be controlled according to the disorder characteristics and real-time feedback.

This study presents the design of a real-time gait analysis system using wearable sensors. This analysis system can be used both to diagnose gait disorders and to control soft body exoskeletons in Parkinson's patients. The wearable sensors consist of three low-cost electromyography (EMG) circuits and four 6-axis inertial measurement units (IMUs), positioned on the primary muscle groups involved in gait. Load cells are placed under the feet to capture dynamic force data. All sensor data are acquired by a central microcontroller and wirelessly transmitted to a server for signal processing.

The data is processed to extract both physiological and kinematic parameters from gait cycles. Using the dataset of gait cycle parameters, a machine learning model facilitates a quantitative assessment of the spectrum of gait disorders. This analysis will generate real-time feedback by evaluating kinematic and physiological parameters. The objective of the feedback is to provide an adaptive control mechanism for therapeutic devices suitable for gait disorders, such as soft body exoskeletons. The machine learning model is also used to iteratively improve the control model at each step. In this way, our study will offer low-cost, adaptive physiological control for traditional therapeutic exoskeletons, especially for Parkinson's patients.
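
One of the kinematic parameters mentioned above, stride time, can be extracted from the under-foot load cells by segmenting gait cycles. A minimal sketch using a threshold-crossing heel-strike detector (the detector, sampling rate, and threshold are illustrative assumptions, not the study's actual pipeline):

```python
def detect_heel_strikes(force, threshold, fs):
    """Return timestamps (s) where the load-cell signal rises above the
    threshold, taken here as heel-strike events (simplified detector)."""
    strikes = []
    for i in range(1, len(force)):
        if force[i - 1] < threshold <= force[i]:
            strikes.append(i / fs)
    return strikes

def stride_times(strike_times):
    """Stride time = interval between consecutive heel strikes of one foot."""
    return [b - a for a, b in zip(strike_times, strike_times[1:])]

# Hypothetical 100 Hz load-cell trace containing two stance phases
force = [0] * 50 + [400] * 30 + [0] * 70 + [400] * 30 + [0] * 20
events = detect_heel_strikes(force, 100, fs=100)
print(stride_times(events))   # -> [1.0]
```

In a real pipeline these per-cycle intervals, together with EMG and IMU features, would form the feature vectors fed to the machine learning model.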

  • Open access
Lens Distortion Measurement and Correction for Stereovision Multi-Camera System

In modern autonomous systems, measurement repeatability and precision are crucial for robust decision-making algorithms. Stereovision, which is widely used in safety applications, provides information about the object's shape, orientation, and localisation in a 3D space. The camera's lens distortion is a common factor that introduces measurement errors.
Traditional calibration methods require a test for each camera to determine its correction parameters. However, this approach is infeasible for large-scale production due to its time consumption and complexity. In this paper, a general correction model is developed using a statistical approach to minimise the effect of lens distortion across different cameras of the same type: Basler lenses (C125-0618-5M F1.8 f6mm) assembled with Sony IMX477R matrices.
This is done with the aid of a novel method for lens distortion measurement based on linear regression over photographed vertical and horizontal lines. The distortion measure for the general correction model is 3.0 for horizontal and 4.5 for vertical lines, while the individual scores of cameras range from 1.5 to 7.8 for horizontal lines and from 1.2 to 23.4 for vertical lines. Furthermore, the lens distortion correction model is validated in stereovision applied to bird tracking around wind farms. The correction of synthetically generated bird flight trajectories can reduce the error by around 15-20% in disparity and depth estimation in certain regions of the image; e.g., a bird at a distance of 610 meters (5 pixel disparity) is seen by the distorted lens at 520 meters (5.9 pixel disparity). The results confirm that the presented general correction model meets the accuracy requirements of multi-camera applications.
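
The disparity-to-depth figures quoted above follow the standard stereo relation Z = fB/d. A sketch with the product fB chosen so that a 5-pixel disparity maps to 610 m (f and B here are assumptions for illustration, not the paper's calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Assumed values: focal length 3050 px, baseline 1 m, so f*B = 3050 m*px
f_px, B_m = 3050.0, 1.0
print(depth_from_disparity(f_px, B_m, 5.0))              # -> 610.0
print(round(depth_from_disparity(f_px, B_m, 5.9), 1))    # ~517, near the reported 520 m
```

Because depth is inversely proportional to disparity, a sub-pixel distortion error of 0.9 px at 5 px disparity already shifts the estimate by roughly 15%, which is why lens correction matters at long range.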

  • Open access
Silver nanoparticles stabilized by various organic coatings for gas sensors: comparative analysis by surface plasmon resonance and quartz crystal microbalance methods

Composite nanostructures stabilized by branched polymers are of undoubted interest for chemical sensors. The combination of a heavy metal with the polymer’s functionality creates a smart nanobot, where the nanoparticle’s inertial mass enhances the initial adsorption effect. In this report, we discuss advanced QCM sensors in which the informative signal is due to a change in structural organization, triggered by the adsorption of the analyte.

Silver nanoparticles with a diameter of 60 nm, coated with polymeric PEG, BPEI, PVP, or citric acid (CIT), were applied by dropping, followed by drying at room temperature. Quartz resonators (10 MHz) and surface plasmon resonance (SPR) chips were used as physical transducers. The response of the SPR and quartz crystal microbalance (QCM) transducers to vapors of water and ethyl alcohol was measured in a carrier-gas flow at room temperature.

SPR spectroscopy demonstrates the typical shift toward larger angles; the saturation level depends on the type of coating. QCM measurements confirm the possibility of analyte detection: the responses are specific to the analyte-sensitive-layer pairs. However, in contrast to the SPR results, the responses to ethanol vapors for PEG- and BPEI-based coatings have the opposite sign compared with sensitive layers based on CIT and PVP.

According to the Sauerbrey model, an increase in adsorbed mass should lead to a decrease in frequency, which is observed for water vapor and for ethanol on CIT and PVP. For PEG and BPEI, anti-Sauerbrey behavior dominates: adsorption changes the viscoelastic characteristics of the coatings, so the heavy metal nanoparticles cease to be rigidly fixed on the surface. Such a mechanism allows the creation of highly selective sensors whose responses differ not only in magnitude but also in sign.
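
The Sauerbrey relation invoked above can be evaluated directly. A sketch using the standard quartz constants and an illustrative mass loading (the numbers are not from this study):

```python
import math

def sauerbrey_df(f0_hz, dm_kg, area_m2, rho_q=2648.0, mu_q=2.947e10):
    """Sauerbrey frequency shift: df = -2 f0^2 dm / (A * sqrt(rho_q * mu_q)).
    rho_q, mu_q: density and shear modulus of AT-cut quartz.
    A rigidly coupled added mass lowers the resonant frequency."""
    return -2.0 * f0_hz ** 2 * dm_kg / (area_m2 * math.sqrt(rho_q * mu_q))

# 10 MHz crystal (as in the abstract), 1 ng adsorbed on 1 cm^2 (illustrative)
df = sauerbrey_df(10e6, 1e-12, 1e-4)
print(round(df, 3))   # about -0.226 Hz
```

A positive frequency shift on adsorption, as seen for PEG and BPEI with ethanol, therefore cannot be a pure mass effect and points to the viscoelastic (anti-Sauerbrey) mechanism described above.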

  • Open access
Integrated low-cost wearable electrocardiograph system for primary assessment of rural residents

In recent years, technology and medicine have both made great progress. Low-cost, high-precision sensors have emerged, and microcontrollers with high computing power and low power consumption have been created. In densely populated urban areas, health care is provided by a network of hospitals and medical centers where every patient can find an immediate response to a health problem. However, medical care in non-urban areas remains limited, as many such areas do not have a hospital or medical center nearby. This results in inadequate medical care, both diagnostic and preventive, for the inhabitants of these areas. Today, modern microprocessors, high-speed internet, and low-cost sensors enable the creation of autonomous, accessible health monitoring units. In this work, an integrated low-cost, wearable electrocardiograph system is presented. The system is able to operate anywhere there is an active internet connection. The proposed system consists of two parts: first, the wearable ECG, which can be located in the patient's home; second, the information system, in which the data are collected and visualized so that the doctor has immediate access. Health is a precious commodity, and the application of technology is imperative, especially for the health care of citizens in remote areas.

  • Open access
Modelling, Analysis and Sensory Metrication towards a Quantitative Understanding of Complexity in Systems for Effective Decision Making

Modelling and metrication of the complexity of service systems have remained an underdeveloped problem space in the literature. Complexity modelling of service systems from a sensory perspective is significant for understanding their behaviour and for their effective overall management.

In this research, the complexity of a service system, premised on a tertiary institution of learning, was modelled and quantified. The concept deployed focused on modelling the three core blocks of a system, viz. the functional elements (FEs), the physical elements (PEs), and the intricacy of connectivity (IoC) associated with the flow of signals, including data and information, in normal systemic operations. The modelling of the IoC focused on the intra- and inter-functional dynamics of the system: interactions and operations within the system on the one hand, and direct operations between the system and other systems on the other, as holistically captured during the course of this research.

The numerous tasks and activities depicting the functional elements, and their corresponding embodiments depicting the physical elements of the system in their diversity and multiplicity, were holistically enumerated prior to the modelling and metrication process, which used core theories such as systems thinking and binary interaction matrices of interacting system elements.
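
The abstract does not state its metric explicitly; purely as an illustration of how a binary interaction matrix of system elements can yield a complexity score, a simple size-plus-connectivity count (an assumed scheme, not the authors' formula) might look like:

```python
def interaction_complexity(matrix):
    """Score a system from a symmetric binary interaction matrix
    (matrix[i][j] = 1 if elements i and j interact).
    Simple scheme: number of elements + number of interacting pairs."""
    n = len(matrix)
    pairs = sum(matrix[i][j] for i in range(n) for j in range(i + 1, n))
    return n + pairs

# Toy system: 4 elements with 4 interacting pairs
M = [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]
print(interaction_complexity(M))   # -> 8
```

Richer schemes weight functional versus physical elements or intra- versus inter-system links separately, which is the distinction the IoC modelling above draws.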

The outcome of this research contributes to the literature of complex systems modelling and metrication by proffering quantitative solutions to the complexity of a service system via a case study in an educational problem domain. This research extends the principles of metrication and sensory perception to the wider social space in a bid to understand societal entities and their complex nature from a sensory dimension premised on quantification.

  • Open access
Reviewing Current Trends: Machine Learning for Risk Assessment of Occupational Exoskeletons

With the increase in musculoskeletal injuries caused by poor posture and excessive physical exertion at work, assistive wearable solutions such as occupational exoskeletons have been developed. These exoskeletons are paired with advanced wearable sensors that monitor physical exertion and provide real-time data on muscle activity and movement trajectory. Exoskeletons help users by improving their posture and enabling a more effective redistribution of the load on the working muscles. In recent years, there has been a significant increase in efforts to create innovative tools and methods that incorporate machine learning (ML) systems and sensor technologies into risk assessment prediction for exoskeletons. The ML systems process the data that the sensors collect to enhance the accuracy of risk assessments. For example, electromyography (EMG) sensors have been used in previous exoskeleton studies to measure muscle activation levels, strain, and fatigue during various manual tasks. The primary objective of this poster is to discuss the effectiveness of existing ML systems that aid user training in exoskeleton research and predict risk assessments of industrial exoskeletons. The current systems have shown substantial benefits, such as accurately predicting risk for a specific muscle group while carrying out a particular action. However, these ML models are significantly limited in scope by a lack of experimental data. To compensate, many of the ML models rely extensively on data augmentation, which hinders the system’s overall accuracy. Studies have shown that integrating more sophisticated sensors with real-time insights can reduce the reliance on data augmentation by providing real-world, immediate data. The poster intends to aid future developments considerably by presenting a comprehensive outline of sensor-integrated ML models for risk assessment of occupational exoskeletons.

  • Open access
Exploring Sleep Apnea Risk Factors with Contrast Set Mining: Findings from the Sleep Heart Health Study

Sleep apnea is a common sleep disorder with potentially serious health consequences. Identifying risk factors for sleep apnea is crucial for early detection and effective management. Traditionally, this has been achieved through statistical methods such as Pearson’s and Spearman’s correlation analysis, which examine relationships between individual variables and sleep apnea. However, these methods often miss complex, nonlinear patterns and interactions among multiple factors. In this study, we applied contrast set mining to identify patterns in attribute-value pair combinations (contrast sets) in the Sleep Heart Health Study database that differentiate between groups with varying levels of sleep apnea severity. Our findings reveal that males and individuals aged 57 to 73 exhibit a higher risk of sleep apnea, with a confidence exceeding 75%. Moreover, male patients diagnosed with second-degree obesity, defined as a body mass index (BMI) between 35 and 39.9 kg/m², show an elevated risk of severe apnea, with a lift of 2.31, support of 0.18, and confidence above 80%. In contrast, female patients with a BMI within the normal range (18.5-25 kg/m²) demonstrate a lower risk of sleep apnea, with a lift of 2.16, support of 0.13, and confidence exceeding 76%. Contrast set mining helps uncover meaningful rules within subgroups that traditional methods, such as Pearson’s or Spearman’s correlation analysis, might overlook. Future research will focus on developing sleep apnea screening models using the contrast set rules identified in this study, specifically tailored for consumer wearable sensors.
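
The reported support, confidence, and lift follow their standard rule-mining definitions. A minimal sketch over toy records (not SHHS data) showing how such metrics are computed for one rule:

```python
def rule_metrics(data, antecedent, consequent):
    """Support, confidence, and lift for a rule antecedent -> consequent
    over a list of records (dicts of attribute-value pairs)."""
    n = len(data)
    a = [r for r in data if all(r.get(k) == v for k, v in antecedent.items())]
    both = [r for r in a if all(r.get(k) == v for k, v in consequent.items())]
    c = [r for r in data if all(r.get(k) == v for k, v in consequent.items())]
    support = len(both) / n                     # P(antecedent and consequent)
    confidence = len(both) / len(a)             # P(consequent | antecedent)
    lift = confidence / (len(c) / n)            # confidence / P(consequent)
    return support, confidence, lift

# Toy records with hypothetical attribute-value pairs
data = [
    {"sex": "M", "bmi": "obese2", "apnea": "severe"},
    {"sex": "M", "bmi": "obese2", "apnea": "severe"},
    {"sex": "M", "bmi": "normal", "apnea": "none"},
    {"sex": "F", "bmi": "normal", "apnea": "none"},
    {"sex": "F", "bmi": "normal", "apnea": "severe"},
]
s, c, l = rule_metrics(data, {"sex": "M", "bmi": "obese2"}, {"apnea": "severe"})
print(s, c, round(l, 2))
```

A lift above 1 (as in the reported 2.31 and 2.16) means the contrast set occurs in the severity group more often than chance alone would predict.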

  • Open access
Analog Electronics Neural Networks: Analog Computing combined with Digital Data Processing Revisited

Analog ANNs as Co-processors: For decades, signal processing was performed with analog electronics, including during the era of analog computers. In recent decades, most analog circuits have been substituted by digital electronic systems. Artificial Neural Networks (ANNs) were originally inspired by analog systems and implemented with analog electronics; today they are computed on digital computers with discretized arithmetic.

A weighted analog electronic summer circuit requires l+1 resistors and a difference amplifier with about 4-8 transistors (at least 2). An approximated non-linear transfer function, e.g., the sigmoid function, can be built from at least two transistors, and typically fewer than 20 transistors if the gradient of the function is computed, too. The tanh function can be implemented with only two diodes. Such small circuits are well suited for printed (organic) electronics, which is increasingly replacing silicon electronics but still limits circuits to a size of about 100 transistors.
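
The cost of such minimal circuits is approximation error. As a sketch, assuming an idealized two-diode limiter behaves as hard clipping (an idealization, not a measured circuit), the worst-case deviation from the true tanh can be bounded numerically:

```python
import math

def tanh_pwl(x):
    """Hard-clipping piecewise-linear approximation of tanh: the idealized
    transfer curve of a unity-gain stage limited by two diodes."""
    return max(-1.0, min(1.0, x))

# Worst-case approximation error over a coarse grid of inputs
xs = [i / 100 for i in range(-300, 301)]
err = max(abs(tanh_pwl(x) - math.tanh(x)) for x in xs)
print(round(err, 3))   # -> 0.238, reached at |x| = 1
```

An error of roughly 0.24 at the knee is the kind of deviation the circuit-simulation workflow described below has to account for during training.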

In our work we address the following questions:

1. Can AANNs be trained with digital floating-point arithmetic performing gradient-based error optimization and finally be converted into an analog circuit approximation (assuming ideal operational amplifiers)?
2. Can AANNs be trained with an in-circuit model approach using a circuit simulator performing gradient-based error optimization, especially assuming transistor-reduced nodes?
3. If no traditional gradient-based error optimization can be applied (due to a lack of gradient functions or excessive computational times), can genetic algorithms be used to find a solution?
4. Are organic transistors suitable?

The implementation and approximation error of simple non-linear activation functions using transistor electronics are investigated and discussed. Instead of using real analog electronics, we substitute the circuits with a simulation model using the spice3f simulator. We consider different model abstraction levels, starting with ideal operational amplifiers (voltage-controlled voltage sources), then using approximated real OPAMP models, and finally introducing transistor circuits with models of organic transistors.

  • Open access
Electrodeless studies of MXenes in aqueous and polar non-aqueous aprotic solvents

MXenes attract a lot of attention due to their unique properties, in particular their high electrical conductivity. The physical processes occurring during electrodeless studies of the specific electrical conductivity σ of MXenes in distilled water and in the polar non-aqueous solvent N-methyl-2-pyrrolidone (NMP) at fixed resonant frequencies for five solenoids (f1 = 160 kHz, f2 = 270 kHz, f3 = 1.6 MHz, f4 = 4.8 MHz, f5 = 23 MHz) are considered. The oscillating circuit was tuned to resonance by changing the capacitance of the BM-560 Q-factor meter. The Q factor of the oscillating circuit was measured in the range of 100-300 with a maximum relative error of ±5%, and in the range of 30-100 with a maximum relative error of ±3%. The cylinder with the liquid was placed in the middle of the measuring solenoid, in the area of a homogeneous magnetic field. The measurements were performed for four control volumes of the liquids under study (1 ml, 2 ml, 3 ml, 4 ml). The best measurement sensitivity was observed for the maximum volume of liquid (4 ml). A difference was observed between the experimental dependences of the introduced attenuation d of the oscillating circuit for MXenes in the aqueous and in the non-aqueous polar solvent NMP. The nonlinear dependence of the attenuation d on the volume of the studied liquids was analysed. The maximum attenuation for the solenoid at the resonant frequency of 160 kHz was observed for the NMP-MXene measurement, whereas MXenes in distilled water showed the highest attenuation at a frequency of 1.6 MHz.
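
The abstract does not define the introduced attenuation d explicitly; one common convention, assumed here purely for illustration, takes it as the difference of loss factors between the loaded and empty circuit:

```python
def introduced_attenuation(q_empty, q_loaded):
    """Introduced attenuation of the oscillating circuit, taken here as the
    difference of loss factors d = 1/Q_loaded - 1/Q_empty (one common
    convention; the abstract does not state its exact definition)."""
    return 1.0 / q_loaded - 1.0 / q_empty

# Hypothetical Q readings from the Q-factor meter, with and without the sample
print(round(introduced_attenuation(250.0, 120.0), 5))   # -> 0.00433
```

Under this convention, a more conductive (lossier) liquid lowers the loaded Q and thus raises d, which is how the frequency-dependent contrast between the NMP and water dispersions would manifest.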
