List of accepted submissions

 
 
  • Open access
  • 0 Reads
Multimodal Immersive Environment for Evaluating Reaction Time and Decision-Making in Sport Situations: A Pilot Study on Time Analysis and Learning Effects

Introduction: The assessment of motor performance in athletes is a multifaceted endeavor, with methods varying across sports, each requiring assessment techniques tailored to its specific characteristics. Sports science research aims to establish reliable methods that depend on the complexity of the instruments. Athletes have been studied in controlled laboratory settings, immersive virtual reality, and real-world scenarios. However, there are inherent limitations in translating findings from controlled laboratory environments to practical sports applications. To bridge this gap, this study introduces a mixed modality, combining virtual immersion with physical objects. We present a quantitative assessment based on time analysis.

Methods: This pilot study evaluated male athletes performing motor reaction tasks, moving their hands in response to visual stimuli under a go/no-go protocol. The same protocol was carried out in a quiet laboratory environment (QLE) and a multimodal immersion environment (MIE). The QLE lacked noise or visual distractions other than the task stimuli. The MIE included a 180°-enveloping screen with sport-specific video and audio noise and the same task stimuli as the QLE. Motion capture and EMG data from the upper limbs were collected. Reaction time (RT) and decision-making time (DM) were used to compare the athletes' performance.

Results and Discussion: The two-way ANOVA showed no significant differences in RT and DM between left- and right-hand performance across environments, consistent with previous findings. The RT in the MIE was significantly shorter (by ~10%; p<0.001, ES=0.52) than in the QLE. The DM showed no differences between environments. While the absence of changes in DM suggests consistent processing speed in both environments, the improved RT performance in the MIE indicates a potential advantage of using combined scenarios in future assessments.
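The reported percentage reduction and effect size can be reproduced from raw reaction times in a few lines. A minimal sketch with hypothetical RT values (not the study's data), computing the percentage RT reduction and Cohen's d; the abstract does not state which effect size index was used, so Cohen's d is an assumption:

```python
import math

def cohens_d(sample_a, sample_b):
    """Effect size between two samples, using the pooled standard deviation."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a, mean_b = sum(sample_a) / n_a, sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical reaction times in seconds (illustrative only)
rt_qle = [0.310, 0.295, 0.330, 0.305, 0.320]   # quiet laboratory environment
rt_mie = [0.280, 0.270, 0.300, 0.275, 0.290]   # multimodal immersion environment

mean_qle = sum(rt_qle) / len(rt_qle)
mean_mie = sum(rt_mie) / len(rt_mie)
reduction = 100 * (mean_qle - mean_mie) / mean_qle
print(f"RT reduction: {reduction:.1f}%, Cohen's d = {cohens_d(rt_qle, rt_mie):.2f}")
```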

Conclusion: A multimodal immersion environment could allow the introduction of specific stimuli for physical training and assessment, thereby improving the transferability of findings to real-world sports situations.

Are 24 bits too high of a resolution for wearable sEMG devices? What open datasets say

The surface electromyography (sEMG) signal is used in the medical field for treating various diseases related to muscular conditions, as well as in other applications such as video games, gesture detection for smart devices, motion pattern recognition, and monitoring muscle activity in athletes. The proper acquisition, processing, and handling of this signal are important for data reliability. Specialized devices digitize the sEMG signal, but since there is no established standard resolution, the resolution varies from one device to another. Currently, semiconductor companies are marketing remarkable 24-bit data acquisition chips. This higher resolution should provide better diagnostics, but it also demands memory storage and bandwidth, resources that are limited if a wearable device is to be practical and realizable. In any case, it is important to ensure the accuracy of the data for applications relying on the sEMG signal. This article delves into the real resolution used to develop sEMG-based applications by first investigating the accuracy of open-access sEMG databases and comparing it with the claimed resolution. A methodology is proposed for resolution evaluation. Additionally, hand gesture evaluation was conducted using classification algorithms to ascertain whether 24-bit resolution outperforms lower resolutions. Finally, an investigation of the wireless transmission required for eight high-resolution sEMG channels is presented. Preliminary results of the hand gesture evaluation demonstrated better classification with 24-bit resolution, but with an accuracy improvement of only 0.44% to 1.6% over 16-bit data. Some of the conditions under which this high resolution may be relevant are identified.
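The abstract does not detail the proposed resolution-evaluation methodology. One simple way to check whether a dataset actually uses its claimed bit depth is to measure the smallest step between distinct sample values; this is a hypothetical sketch of that idea, with an invented function name and a synthetic channel, not the article's method:

```python
import math

def effective_bits(samples, full_scale):
    """Estimate the bit depth actually used in a recording from the smallest
    non-zero step between distinct sample values: data stored in a 24-bit
    container but quantized on a 16-bit grid reveals itself here."""
    levels = sorted(set(samples))
    steps = [b - a for a, b in zip(levels, levels[1:]) if b - a > 0]
    if not steps:
        return 0
    return math.ceil(math.log2(full_scale / min(steps)))

# Synthetic channel: values lie on a 2**-16 grid despite any claimed 24 bits
grid = 1.0 / 2**16
channel = [k * grid for k in (100, 101, 105, 220, 221, 1000)]
print(effective_bits(channel, full_scale=1.0))  # prints 16
```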

Novel glutathione-responsive polyrotaxanes with enhanced cellular uptake

Polyrotaxanes are supramolecular assemblies composed of polymers, threaded macrocycles, and bulky stopper molecules that inhibit the decomposition of the first two. Cyclodextrin (CD)-based polyrotaxanes are promising therapeutic agents for lysosomal storage disorders, as they can transport CDs into cells to normalize the intracellular trafficking of lipids, such as glycolipids or even cholesterol. Despite the importance of these supramolecular polymeric assemblies, their application is severely hindered by costly, multistep synthesis methods, as well as by the low aqueous solubility of non-modified polyrotaxanes. We have developed a simple one-pot synthesis of CD-based polyrotaxanes with poly(ethylene glycol) (PEG) and poly(ε-caprolactone) (PCL) axes using glutathione-sensitive, disulfide-connected stopper molecules. This new synthetic pathway provides high threading efficiency and, due to the versatility of the threaded CDs, also tunable water solubility. In the case of PCL polyrotaxanes, high biodegradability was also detected, using lipase as a model enzyme. Cellular uptake studies on the Caco-2 cell line showed up to 52-fold enhanced cellular internalization of these supramolecular assemblies compared to free CDs. In addition, the glutathione-triggered reductive removal of the stopper molecules demonstrated the potential decomposition of these polyrotaxanes in target cells. Based on these results, the synthesized polyrotaxanes with disulfide stopper molecules might be promising supramolecular excipients for the cellular delivery of α-CD and its derivatives.

Effects of Alloxan Monohydrate-Induced Cognitive Decline Associated with Diabetes in Wistar Rats

Introduction: Untreated chronic diabetes can lead to the formation of amyloid plaques in the brain, reflecting signs of Alzheimer's disease. Effective treatment of chronic diabetes does not fully resolve cognitive problems.
Objective: To assess the cognitive decline associated with alloxan monohydrate-induced diabetes in Wistar rats.
Material and methods: This study involved 24 Wistar rats weighing between 150 and 300 g. They were divided into three groups: (1) normal rats, (2) untreated diabetic rats and (3) diabetic rats treated with D-erythrodihydrosphingosine. The rats were given glucose and food overnight to prevent hypoglycaemia. Behavioural experiments were carried out using object recognition and radial arm maze tests.
Results: The results showed a significant difference between the groups in terms of weight, with a significant decrease observed from day 7. Behavioural analysis revealed significant differences in the time the rats spent exploring familiar and novel objects. The study also assessed the cognitive impact of diabetes by evaluating spatial learning in an eight-arm radial maze. The results showed that diabetes affects working memory and that SPK1 and SPK2 inhibitors do not improve it in diabetic rats.
Conclusion: In diabetic rats, working memory is impaired and spatial learning is difficult. Nevertheless, these results help to clarify the link between cognitive decline and hyperglycaemia and highlight the importance of comprehensive management of diabetic patients with neurocognitive problems.

Optimizing Ensemble Performance with Condorcet Voting: A Study on Weak Learners for Image Classification

Introduction:

Convolutional neural networks (CNNs) are a primary tool for image classification. This study proposes a novel approach to enhance ensemble learning by modifying the voting rule used to aggregate results from individual classifiers. Typically, simple and weighted majority rules are used, but recent literature suggests other rules may be more efficient for specific tasks, particularly in multi-class classification. This project tests the Condorcet voting rule for image classification.

Methods:

Ensemble learning combines predictions from multiple classifiers into a single result based on voting. Traditional voting rules often limit the potential of these ensembles, especially with weak learners whose accuracy is below 50% in multi-class classification tasks. By exploring the Condorcet voting rule, this study aims to improve accuracy without domain-specific knowledge, efficiently balancing the classifiers' achievements and weights. This approach may benefit weak learners, which consume less energy and reach peak performance faster than traditional methods.

Classical networks such as VGG, ResNet, EfficientNet, and other CNNs were employed. The Condorcet rule was compared against traditional simple and weighted majority rules. The CIFAR-100 dataset was used for a balanced and comprehensive evaluation of the models' performance. The models were limited to 35 layers, with average accuracy across individual models being no more than 28% (random guessing yields 1% accuracy in a 100-class setup).
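The Condorcet rule itself is simple to state: a class wins if it beats every other class in pairwise majority comparisons of the classifiers' rankings. A minimal sketch of that rule (the ranked-vote format and the cycle fallback are assumptions for illustration, not the authors' implementation):

```python
def condorcet_winner(rankings):
    """Return the class that wins every pairwise majority contest, or None.

    `rankings` holds one ranked list of class labels per classifier,
    best-scoring class first (e.g., classes sorted by softmax output)."""
    classes = set(rankings[0])
    for candidate in classes:
        rivals = classes - {candidate}
        # the candidate must beat every rival in a pairwise majority vote
        if all(sum(r.index(candidate) < r.index(rival) for r in rankings)
               > len(rankings) / 2
               for rival in rivals):
            return candidate
    return None  # Condorcet cycle: fall back to another aggregation rule

# Three weak classifiers ranking four classes by confidence
votes = [["cat", "dog", "fox", "owl"],
         ["dog", "cat", "fox", "owl"],
         ["cat", "fox", "dog", "owl"]]
print(condorcet_winner(votes))  # prints cat
```

Because a Condorcet winner need not exist (rankings can cycle), a practical ensemble needs the fallback branch, e.g. reverting to weighted majority.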

Results:

The ensemble model using the Condorcet rule showed at least a 4% accuracy improvement over simple and weighted majority rules, and more than a 15% improvement relative to the average accuracy of the individual models.

Conclusions:

This study suggests that alternative voting rules, such as the Condorcet rule, can improve the performance of ensembles in image classification without domain-specific knowledge and without altering the energy consumption or training time of individual classifiers. Further study of even weaker learners, to optimize the balance between energy consumption and accuracy, is a promising direction for the field.

Preliminary assessment of temperature as a relevant factor in the pyrolysis process for the valorization of corn cob waste (Zea mays)

Thermogravimetric analysis (TGA) laboratory tests were used to determine the thermal stability and degradation of corn cob waste (Zea mays). It was observed that corn cob waste should not be exposed to temperatures higher than 240 °C, because beyond this temperature a mass loss of around 50% occurs. Up to this temperature, only traces of moisture and some low-molecular-weight aromatic molecules are removed after the drying stage.

Subsequently, the corn cob residues were degraded by pyrolysis in a nitrogen (N2) atmosphere at different temperatures (500, 550, and 600 °C). Tests were performed on each solid product obtained to determine its stability.

Finally, FTIR tests were applied to decode the signals and generate spectra that allowed the identification and quantification of the materials present in the samples. Comparing the tests before and after pyrolysis of the corn cob residues shows the conservation of the C–O, C–H, C=C, and C=O bonds, which are common in ethers and in aromatic compounds that include hydroxyl groups.

In conclusion, corn cob pyrolysis represents a promising technology for the valorization of agricultural residues and the production of sustainable bioproducts. Through this process, it is possible to contribute to the transition towards a circular economy and to the mitigation of climate change.

Expression of Tryptophanyl-tRNA Synthetase (WARS) and Indoleamine 2,3-Dioxygenase-1 (IDO1) in the Prediction of Bladder Cancer Staging

Introduction

Bladder cancer (BC) is one of the most common neoplasms in the world. Like other tumor types, BC can produce the enzyme indoleamine 2,3-dioxygenase-1 (IDO1), which, by modulating the immune system, protects the tumor and favors its progression. When present, IDO1 degrades tryptophan in the microenvironment, producing kynurenine catabolites. This blocks the local immune response but does not affect the IDO1-producing cell itself. The mechanisms behind this resistance remain unknown, but there is evidence that tryptophan reserves are generated by the enzyme tryptophanyl-tRNA synthetase (WARS), which loads tRNA with the amino acid, ensuring protein synthesis and cell-cycle progression.

The objective of this study was to verify if the expression of IDO1 and WARS is associated with BC staging.

Materials and Methods

The study included BC specimens extracted from 165 patients, 88 with non-muscle invasive BC (NMIBC) and 77 with muscle invasive BC (MIBC). The project was approved by the ethics committee (49446515.0.0000.5511). The expression of IDO1 and WARS was evaluated by immunohistochemistry in both neoplastic and inflammatory cells. Correlation analysis (Spearman) and ROC curve were used.
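The two statistical tools named above can be illustrated in a few lines. A hypothetical sketch using SciPy and scikit-learn, with made-up semi-quantitative staining scores rather than the study's data:

```python
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

# Hypothetical immunohistochemistry scores (0-3) and invasiveness labels
ido1 = [0, 1, 3, 2, 0, 3, 1, 1, 3, 0]
wars = [1, 0, 2, 3, 1, 2, 0, 1, 3, 1]
mibc = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # 1 = muscle-invasive BC

rho, p_value = spearmanr(ido1, wars)    # correlation between the two markers
auc = roc_auc_score(mibc, ido1)         # IDO1 expression as a staging predictor
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f}), IDO1 ROC AUC = {auc:.2f}")
```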

Preliminary Results

Although no correlation was detected between IDO1 and WARS, the expression of both proteins was effective in predicting NMIBC.

Preliminary Conclusion

It is possible that the enzymes IDO1 and WARS play a role in the pathophysiology of BC and have predictive power in disease staging. This study is ongoing for further clarification.

Funding

FAPESP 2022/15575-8

The Use of Software in Dental Practice Management: Predictable Tools to Improve Economic Performance

Introduction: Today, through management programs, dentists have data at their disposal that can be useful for measuring and planning strategies to better manage their practice. This study aims to analyze the digital benefits and possible outcomes of using new IT solutions.

Materials and Methods: First and foremost, it cannot be taken for granted that all dental practices have a management program and, above all, that they correctly enter the information needed for the subsequent monitoring of their center's performance. Constant monitoring should be carried out according to the logic of analyzing the two macro-areas of management control, namely effectiveness and efficiency.

Results: Today, the amount of information available to dental practices is truly substantial, but it is only usable if it is correctly entered into a management system or other electronic support: data quality is fundamental, since imprecise, partial, inconstant, or uncoded data do not allow for subsequent processing and render the great compilation efforts made by the entire team useless. The first step along the path of computerizing a dental practice is to determine the objectives to be achieved.

Conclusion: It is fundamental to avoid critical issues, which are rarely attributable to the software itself but rather to the process of change that must be implemented within the practice. Computerization is a radical change and involves an investment in terms of time and commitment on the part of the staff involved.

Emulation of manufacturing equipment through the dynamic generation of graphic and interactive environments

The work presented in this paper is part of an Industry 4.0—Factories of the Future project, which explores the use of intelligent multi-agent architectures to control flexible manufacturing units. The concept of flexibility refers to the ability to adapt manufacturing processes in an agile way to context variations, whether these result from changes in the references under production, the priority and quantity of orders, limitations to or unavailability of equipment, or a lack of components. Studying control solutions for such diverse environments and contexts is a complex task, particularly with regard to validating and evaluating the control solutions. Mass application in a real context is impractical, at least in the initial stages, so the only option is to resort to simulation, which is perfectly suitable for scientific validation but not flexible and practical enough for proofs of a more practical nature or for demonstrations. It is in this context that the work presented here was developed. The result is a framework used to generate virtual manufacturing equipment. Assuming that the behavior of the equipment is described by a code routine, the framework allows the state, input, and output variables to be annotated and, based on this, generates a virtual version of the equipment with a graphical and interactive interface that can be interconnected with other virtual and real equipment. In this way, it is possible to replicate a manufacturing unit with real and virtual equipment and test the practical behavior of the control solutions, allowing us to interact with the equipment (for example, turning equipment off, injecting faults, or conditioning the production rate) and thus pragmatically and visually assess the behavior of the entire solution.
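The core mechanism described above — annotating the state, input, and output variables of a behaviour routine so that a tool can generate a virtual device around it — can be hinted at with dataclass field metadata. This is a speculative sketch of the idea only, not the project's actual framework; all names (Conveyor, describe, the role markers) are invented for illustration:

```python
from dataclasses import dataclass, field, fields

# Illustrative role markers: they tag each variable for a front-end generator
def inp(default):    return field(default=default, metadata={"role": "input"})
def state(default):  return field(default=default, metadata={"role": "state"})
def output(default): return field(default=default, metadata={"role": "output"})

@dataclass
class Conveyor:
    speed_setpoint: float = inp(0.0)       # commanded by the control layer
    running: bool = state(False)           # internal equipment state
    parts_per_min: float = output(0.0)     # observable by other equipment

    def step(self):
        """One emulation tick: the equipment's behaviour routine."""
        self.running = self.speed_setpoint > 0
        self.parts_per_min = 10.0 * self.speed_setpoint if self.running else 0.0

def describe(device):
    """Introspect the annotated variables -- the hook a graphical and
    interactive interface could use to build itself automatically."""
    return {f.name: (f.metadata["role"], getattr(device, f.name))
            for f in fields(device)}

c = Conveyor()
c.speed_setpoint = 2.0
c.step()
print(describe(c))
```

A real front end would render inputs as controls and outputs as indicators from this same introspection data.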

An Effective Heart Disease Prediction Model Using Hybrid Machine Learning

Heart disease is becoming one of the most critical diseases in the current global scenario. Clinical data analysis faces huge challenges in heart disease prediction due to the increasing number of cases and symptoms common to multiple diseases. This work therefore attempts to improve the early detection of heart failure to save lives. It employs machine learning algorithms, including Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbors, Support Vector Machine, Stochastic Gradient Descent, Multi-Layer Perceptron (MLP), XGBoost, AdaBoost, Extra Trees, Gaussian Naïve Bayes, and the Gradient Boosting Algorithm (GBA), and compares their performance on this task. Further, this paper proposes a hybrid model combining a Multi-Layer Perceptron (MLP) with the Gradient Boosting Algorithm (GBA), enhanced by a novel feature set that achieves the highest possible accuracy scores. All methods were validated using cross-validation. The efficacy of the proposed model was evaluated using metrics such as accuracy, precision, recall, and F1 score. According to the results, the proposed hybrid model predicts early heart disease with a 98% accuracy rate, demonstrating extraordinary accuracy. This combination leads to enhanced accuracy, robust feature selection, better handling of high-dimensional and redundant data, and improved generalization and interpretability. This work has important scientific value in the medical field for improving cardiovascular risk assessment.
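The abstract does not specify how the MLP and GBA are combined; one common reading of such a hybrid is a stacked ensemble, in which the MLP's out-of-fold predictions feed a gradient-boosting meta-learner. A hedged sketch of that interpretation using scikit-learn on synthetic data (not the authors' dataset or pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a 13-feature heart-disease table (illustrative only)
X, y = make_classification(n_samples=300, n_features=13, n_informative=8,
                           random_state=0)

# MLP base learner; its out-of-fold predictions feed the GBA meta-learner
hybrid = StackingClassifier(
    estimators=[("mlp", make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)))],
    final_estimator=GradientBoostingClassifier(random_state=0),
    cv=5)

scores = cross_val_score(hybrid, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

Scaling inside the pipeline matters here: MLPs are sensitive to feature scale, while the gradient-boosting meta-learner is not.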
