
List of accepted submissions

 
 
 
  • Open access
Peripheral venous simulator development for medical training

The necessity to develop skills in medical training, from simple procedures such as sutures, venipunctures, and peripheral venous cannulations to complex surgeries, has driven innovation in the fabrication of medical simulators throughout history. These simulators are crafted from materials that mimic the physical and mechanical characteristics of human body parts, providing realistic training experiences. However, the costs associated with developing these simulators pose a significant challenge, especially for low-income areas. This work explores practical options for creating cost-effective and useful simulators by fabricating pieces that represent the forearm, a common site for venipunctures and peripheral venous cannulations. The fabrication process combined three types of polymers: PDMS, food-grade silicone, and Artesil Shore 20 silicone, along with a Foley catheter to simulate the arm veins. The compatibility of these materials was thoroughly evaluated to produce valid prototypes, ensuring that the stress ratios closely matched the properties of human tissue. Preliminary evaluations of the simulators yielded promising results: based on user experience, they received acceptability ratings of 57.1% excellent, 28.6% good, and 14.3% fair. Medical students who tested the simulators found them effective for explaining the behavior of fluids in the body during venoclysis simulations. This feedback highlights the simulators' utility in providing hands-on training and enhancing the understanding of fluid dynamics in medical procedures.

  • Open access
Bioengineered Neural Interfaces: Pioneering Neurotechnologies for Enhanced Brain-Machine Interactions

Abstract:

Introduction:
Bioengineered neural interfaces represent a revolutionary frontier in neuroscience and bioengineering, facilitating seamless communication between the brain and external devices. This research article explores novel advancements in bioengineered neural interfaces, highlighting their transformative potential in enhancing brain-machine interactions and unlocking new capabilities for individuals with neurological disorders.

Methods:
A comprehensive synthesis of recent literature and cutting-edge research findings was conducted, focusing on emerging trends in bioengineered neural interfaces. Key methodologies and technologies, including neural prosthetics, brain-computer interfaces, and neuromodulation techniques, were explored to elucidate their applications in restoring motor function, facilitating communication, and augmenting cognitive abilities.

Results and Discussion:
The findings underscore the transformative impact of bioengineered neural interfaces, with breakthroughs in areas such as neuroprosthetics for limb control, brain-computer interfaces for communication and control, and neuromodulation for treating neurological disorders. From the development of minimally invasive implantable devices to the integration of machine learning algorithms for decoding neural signals, interdisciplinary collaborations are driving unprecedented progress in neural engineering and neurotechnology.

Conclusions:
Bioengineered neural interfaces offer a paradigm shift in neurotechnologies, providing individuals with neurological disorders newfound independence and autonomy. By harnessing the power of neuroengineering and bioinformatics, researchers and practitioners can bridge the gap between the brain and external devices, enabling seamless brain-machine interactions and enhancing quality of life for individuals with neurological impairments. This research article highlights the transformative potential of bioengineered neural interfaces and emphasizes the importance of continued innovation in neuroengineering to address unmet needs in neurological care.

  • Open access
Performance Evaluation of Machine Learning Algorithms for Predicting Flow Rate in Pipeline Maintenance Optimization

Abstract

Using machine learning to predict maintenance schedules for crude oil pipelines is crucial for enhancing efficiency and minimizing disruptions in the oil and gas sector. Our research explores the effectiveness of machine learning algorithms in this context, with a specific focus on using oil flow rate as the primary predictor. Machine learning models, when trained on a variety of inspection data, can accurately predict flow rate and thus improve maintenance planning. Several pipeline scenarios were analysed, and a Python library was used for dataset augmentation. The study shows a correlation between variations in buildup deposits and flow rate in the pipeline, indicating that flow rate can serve as an indicator of maintenance needs. Specifically, a higher flow rate allows longer intervals between maintenance activities such as pigging, while a lower flow rate may indicate an accumulation of deposits that necessitates intervention. Ensemble machine learning models were trained, and variations in performance were observed. Gradient Boosting and XGBoost Regressor were the best performers, with lower MSE, RMSE, and MAE values and higher R² scores compared to the Support Vector Regressor. Gradient Boosting achieved an MSE of 0.000005, an RMSE of 0.002259, an MAE of 0.000968, and an R² of 0.997259, followed by XGBoost Regressor with an MSE of 0.000005, an RMSE of 0.002269, an MAE of 0.000922, and an R² of 0.997234, while the Support Vector Regressor performed worst, with an MSE of 0.002868, an RMSE of 0.053554, an MAE of 0.046311, and an R² of -0.540765. These findings emphasize the importance of choosing machine learning algorithms that are well suited to the features of the dataset and the task.
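The reported comparison can be sketched with scikit-learn; since the inspection dataset is not public, the features and the flow-rate relationship below are invented for illustration, and the sketch uses the two scikit-learn regressors named in the abstract (XGBoost is omitted as a third-party dependency).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
# Invented features: deposit thickness, pressure drop, pipe age.
X = rng.uniform(0, 1, size=(500, 3))
# Flow rate falls as deposit buildup (column 0) grows; small noise added.
y = 1.0 - 0.8 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.02, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, model in [("GradientBoosting", GradientBoostingRegressor(random_state=0)),
                    ("SVR", SVR())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    results[name] = {"MSE": mse, "RMSE": float(np.sqrt(mse)),
                     "MAE": mean_absolute_error(y_te, pred),
                     "R2": r2_score(y_te, pred)}
    print(name, results[name])
```

On a smooth, low-noise relationship like this one, a boosted ensemble typically outperforms an SVR with default hyperparameters, mirroring the ranking reported in the abstract.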

  • Open access
Evaluating TPUs and GPUs in a Two-View EfficientNet-based architecture for cancer classification on mammograms: performance and speed analysis

Introduction

Breast cancer is the most prevalent cancer among women worldwide. Mammography is the primary exam used to detect this disease in its early stages. Currently, radiologists interpret these radiological images, but CAD (Computer-Aided Detection and Diagnosis) systems have been developed to assist in this process. While GPUs have traditionally been used for training these systems, newer hardware such as TPUs (Tensor Processing Units) has been designed specifically for machine learning tasks and offers advantages over GPUs worth exploring, such as larger memory capacity.

Methods

This work compared the performance of a two-view mammogram classifier proposed by Daniel Petrini et al. in "Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network" and its components (the one-view classifier and patch classifier) on the public dataset CBIS-DDSM (Curated Breast Imaging Subset of the Digital Database for Screening Mammography). The comparison was made using both GPUs and TPUs, leveraging the extra memory and specialized architecture of TPUs.

Results

Training on TPUs was up to 18 times faster than on GPUs, a significant increase in training speed that could lead to better models in future work. However, no conclusive evidence showed that using higher-resolution images with TPUs improved model performance: accuracy and ROC-AUC were similar at 1152×896 (GPU) and 2304×1792 (TPU).
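As an illustration of how such a speedup figure can be measured, the sketch below times two stand-in workloads with a generic harness; the workloads are dummies invented here, not the paper's GPU/TPU training steps.

```python
import time

def time_steps(step_fn, n_steps=5):
    """Mean wall-clock seconds per call of step_fn."""
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    return (time.perf_counter() - start) / n_steps

# Stand-in workloads; a real comparison would time the GPU and TPU train steps.
slow_step = lambda: sum(i * i for i in range(200_000))
fast_step = lambda: sum(i * i for i in range(20_000))

speedup = time_steps(slow_step) / time_steps(fast_step)
print(f"speedup: {speedup:.1f}x")
```

Averaging over several steps, as here, smooths out warm-up and scheduling noise when reporting a speedup ratio.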

Conclusions

Although classification performance did not improve when increasing exam resolution, the use of TPUs is justifiable due to the increase in training speed, which opens up possibilities to train with more data and use more complex architectures, potentially leading to better classification results.

  • Open access
Revolutionizing Stock Market Forecasting: A Cutting-Edge Analysis of Machine Learning Models (CNN, ARIMA, LR, GB, and LSTM)

This comprehensive research delves into the burgeoning field of stock market forecasting, emphasizing the use of advanced artificial intelligence (AI) and machine learning (ML) technologies. The primary objective is to develop a robust model capable of predicting short-term stock market movements for major US-listed companies across various sectors. The predictive algorithm relies heavily on historical price data, technical indicators, and sentiment analysis derived from news sources to generate directional forecasts.

This study investigates several critical components of stock market analysis, including pattern recognition, risk assessment, and the use of machine learning algorithms to predict investment returns. A thorough examination of the Efficient Market Hypothesis (EMH) is conducted to understand its implications on forecasting stock prices using historical data. Additionally, the research evaluates a range of approaches and models pertinent to financial prediction. These include the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model, the AutoRegressive Integrated Moving Average (ARIMA) model, and Long Short-Term Memory (LSTM) networks.
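As a minimal illustration of the autoregressive core shared by ARIMA-family models, the sketch below fits an AR(1) coefficient by least squares on a synthetic series (invented data; a real study would use a dedicated library such as statsmodels on actual price history).

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic AR(1) series with true coefficient 0.9 (invented data).
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.9 * y[t - 1] + rng.normal()

# Regress y_t on y_{t-1} plus an intercept.
X = np.column_stack([y[:-1], np.ones(len(y) - 1)])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_hat = coef[0]
print(f"estimated AR(1) coefficient: {a_hat:.2f}")
```

The recovered coefficient lands close to the true 0.9, which is the same lag-regression mechanism ARIMA generalizes with differencing and moving-average terms.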

Furthermore, this study addresses the inherent data limitations, the risks of overfitting, and the ethical considerations associated with the application of AI and ML in stock market forecasting. By examining these factors, this research aims to highlight the potential and challenges of employing technology-driven methods in financial markets. The ultimate goal is to enhance the accuracy and reliability of stock market predictions, thereby providing valuable insights for investors and stakeholders. Through this rigorous exploration, this study contributes to the ongoing development of more sophisticated and effective forecasting models in the financial industry.

  • Open access
Digital Semantics for Enterprise Information Systems Development

Introduction: Artificial Intelligence (AI) is the most important paradigm shift of our time. Its purpose is to simulate human intelligence inside a machine, and it is already affecting many aspects of our lives. The core of AI research concerns defining solutions to acquire, understand, store, and process digital inputs in order to return results: solutions to think like a human being. Our contribution is a position paper presenting our vision and ideas on this topic. Our proposal complements current AI approaches: it aims to integrate the path of knowledge and development of AI.

Methods: the literature on AI regarding proposals, approaches, and models is vast and encompasses an enormous number of application domains. Our research goal is to propose a paradigm to simulate human intelligence within a computer, limited to the Enterprise Information System (EIS) domain, by means of automata and ontologies. Nowadays, automata are used in Software Engineering to manage decision-making processes and control the information flow within a software system. The term "ontology" has several meanings, depending on the discipline and domain. In the EIS domain, by "ontology" we mean a set of concepts and relationships that represent a knowledge area. We propose Digital Semantics (DS), a paradigm and definition that are, to our knowledge, novel. DS is the proposed solution for defining ontologies, which in turn will have to be implemented through automata. To reach this goal, we answer three Research Questions (RQs): (RQ1) Is it possible to define Digital Semantics as a metamodel based on the semantics of natural languages? (RQ2) Is it possible to define ontologies with Digital Semantics? (RQ3) Can automata be the solution to implement ontologies defined with Digital Semantics?

Conclusions: the aim of the paper is to answer the RQs and obtain feedback from the international scientific community.
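A minimal sketch of the automaton idea, assuming a hypothetical EIS document-approval flow; the states and events below are invented for illustration and are not taken from the paper.

```python
# Deterministic automaton controlling a toy EIS decision flow:
# (state, event) pairs map to the next state.
transitions = {
    ("draft", "submit"): "review",
    ("review", "approve"): "approved",
    ("review", "reject"): "draft",
}

def run(events, state="draft"):
    """Feed a sequence of events through the automaton; unknown events are ignored."""
    for e in events:
        state = transitions.get((state, e), state)
    return state

print(run(["submit", "approve"]))  # -> approved
print(run(["submit", "reject"]))   # -> draft
```

An ontology's concepts and relationships could be encoded analogously, with states standing for concepts and transitions for the relationships the paper proposes to implement through automata.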

  • Open access
A New Approach for Improving Sentiment Analysis Using Multi-Dimensional Feature Reduction

Sentiment Analysis is a sub-field within Natural Language Processing (NLP), concentrating on the extraction and interpretation of user sentiments or opinions from textual data. Despite significant advancements in the analysis of online content, a continuing challenge persists: the handling of sentiment datasets that are high-dimensional and frequently include substantial amounts of irrelevant or redundant features. Existing methods to address this issue typically rely on dimensionality reduction techniques; however, their effectiveness in removing irrelevant features and managing noisy or redundant data has been inconsistent.

This research seeks to overcome these challenges by introducing an innovative methodology that integrates ensemble Feature Selection techniques based on Information Gain with Feature Hashing. Our proposed approach aims to enhance the conventional feature selection process by synergistically combining these two strategies to more effectively tackle the issues of irrelevant features, noisy classes, and redundant data. The novel integration of Information Gain with Feature Hashing facilitates a more precise and strategic feature selection process, resulting in improved efficiency and effectiveness in sentiment analysis tasks.

Through comprehensive experimentation and evaluation, we demonstrate that our proposed method significantly outperforms baseline approaches and existing techniques across a wide range of scenarios. The results indicate that our method offers substantial advancements in managing high-dimensional sentiment data, thereby contributing to more accurate and reliable sentiment analysis outcomes.

  • Open access
Use of Time or Frequency Domain in ECG Analysis

Machine learning techniques have been widely applied in the medical field, with electrocardiogram (ECG) signals being pivotal for detecting arrhythmias and other applications such as sleep analysis and biometric identity recognition. Traditionally, selecting the right features was essential for achieving good performance in classification. However, with the advent of deep learning, particularly convolutional neural networks (CNNs), the classifier itself extracts and selects the relevant features. This development raises the question of whether it is necessary to represent ECG data in different domains, such as the frequency domain, for optimal performance.

This study evaluates the performance of CNNs on three tasks: rhythm classification, apnea detection, and identity recognition, using three input formats: time sequences, Fourier frequency components, and spectrograms. The databases used for this analysis were MIT-BIH Arrhythmia, ECG-ID, and Apnea-ECG from PhysioNet. Customized CNNs were employed for time series and Fast Fourier Transform (FFT) components, while transfer learning with EfficientNet, pre-trained on the ImageNet database, was used for spectrograms. The results were validated using N-fold cross-validation, with N being 10 for the arrhythmia and apnea databases and 4 for the identity database.
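The three input formats can be illustrated on a synthetic one-second window; the signal below is a toy sine mixture, not a PhysioNet record, and the framed-FFT spectrogram is a simplified stand-in for the one used in the study.

```python
import numpy as np

fs = 360                      # MIT-BIH Arrhythmia sampling rate
t = np.arange(fs) / fs        # one second of signal
sig = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)

# 1) Time domain: the raw samples.
time_input = sig

# 2) Frequency domain: magnitudes of the real FFT.
fft_input = np.abs(np.fft.rfft(sig))

# 3) Spectrogram: framed, windowed FFT magnitudes (a simple STFT).
frame, hop = 64, 32
frames = np.array([sig[i:i + frame]
                   for i in range(0, len(sig) - frame + 1, hop)])
spec_input = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))

print(time_input.shape, fft_input.shape, spec_input.shape)
```

The time sequence and FFT components are 1-D inputs suited to custom CNNs, while the 2-D spectrogram is the format that can be fed to an image network such as EfficientNet.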

The mean accuracy obtained for each task was consistently higher when using the time domain compared to the frequency domain, with differences ranging from 1.4% to 5.4%. Consequently, there is no advantage in transforming time-series data into the frequency domain for these tasks.

  • Open access
Facial Recognition of Tribal Marks Using Machine Learning

Tribal marking is an African cultural practice carried out to identify a person's tribe or family. In facial recognition systems, tribal marks are considered soft biometrics and have been used to improve the performance of such systems. Facial mark recognition (FMR) refers to the ability of a system to determine whether a small skin patch taken from a facial image contains a facial mark. Although there have been significant improvements in detecting facial marks using Convolutional Neural Networks, a system integrating African facial marks has not yet been implemented. In this thesis, we implemented a facial recognition system for African tribal marks using a one-shot learning model trained on a collected dataset of images of people with tribal marks. Due to limited sources of data, we adopted data augmentation techniques to increase the size and balance of our dataset. Face detection and extraction were carried out by an MTCNN model, after which embeddings were created using a pre-trained FaceNet model. We then employed a classifier that matched faces to their appropriate classes based on the training dataset. We evaluated our model using various metrics, obtaining an accuracy of 100% for training and 88% for testing in the first experiment, and 99% and 83%, respectively, in the second experiment. We further evaluated the model's performance using the F1 score and MCC, obtaining scores of 0.887 and 0.757, respectively, in the first experiment and 0.830 and 0.733 in the second. This study can prove useful in areas such as twin identification, profiling, and forensic analysis.
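The matching step described above can be sketched as a nearest-neighbour search over embeddings; the 128-dimensional vectors below are random stand-ins for FaceNet outputs, and the identity names are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
# Enrolled identities, one reference embedding each (random stand-ins).
enrolled = {"id_A": rng.normal(size=128), "id_B": rng.normal(size=128)}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def identify(query):
    """Return the enrolled identity whose embedding is most similar to the query."""
    return max(enrolled, key=lambda name: cosine(enrolled[name], query))

# A query close to id_A's embedding should be matched to id_A.
query = enrolled["id_A"] + 0.05 * rng.normal(size=128)
print(identify(query))
```

One-shot recognition hinges on exactly this property: a single enrolled embedding per class is enough, because classification reduces to similarity against the reference vectors rather than retraining.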

  • Open access
Using a virtual assistant for interactive student engagement at universities

This project introduces the development and implementation of a virtual assistant tailored specifically for student interaction in a higher education institution. With the increasing demand for personalized support and efficient communication channels in educational settings, virtual assistants have emerged as a promising technology to enhance student engagement, support academic success, and streamline administrative processes. The virtual assistant presented in this project leverages natural language processing, machine learning, and artificial intelligence techniques to provide an intuitive and interactive platform for students to seek information, access resources, and receive personalized assistance. Its functionalities include the following:

  • Information Retrieval: the virtual assistant serves as a knowledge repository, allowing students to obtain quick and accurate answers to frequently asked questions regarding academic programs, course offerings, campus facilities, and administrative procedures.
  • Personalized Guidance: by analyzing student profiles and historical data, the virtual assistant offers tailored guidance on academic planning, course selection, and career pathways, ensuring that students receive relevant and timely advice based on their individual needs and goals.
  • Task Automation: the virtual assistant automates routine administrative tasks, such as registration, scheduling, and grade tracking, reducing the administrative burden and allowing students to focus more on their learning experience.
  • Interactive Support: through natural language processing capabilities, the virtual assistant engages in dynamic conversations, allowing students to ask questions, seek clarifications, and receive real-time feedback on their academic progress.

Throughout the project's development, a user-centered approach is followed, with continuous feedback from students, faculty, and administrators. Rigorous testing and quality assurance measures are implemented to ensure the accuracy, reliability, and usability of the virtual assistant.
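A minimal sketch of the information-retrieval functionality, assuming a TF-IDF index over known questions; the FAQ entries below are invented, and the project's actual NLP stack is not specified in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented FAQ: known questions mapped to canned answers.
faq = {
    "how do I register for a course": "Use the student portal's Registration tab.",
    "where is the library": "The main library is in building B.",
    "how do I check my grades": "Grades appear under Academic Records.",
}

vec = TfidfVectorizer()
index = vec.fit_transform(faq.keys())

def answer(question):
    """Return the answer of the FAQ entry most similar to the student's question."""
    sims = cosine_similarity(vec.transform([question]), index)[0]
    return list(faq.values())[sims.argmax()]

print(answer("how can I register for courses"))
```

A production assistant would layer intent classification and dialogue management on top, but nearest-question retrieval of this kind is a common baseline for the FAQ component.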
