List of accepted submissions

 
 
  • Open access
  • 123 Reads
DIGITAL CHALLENGES FOR TEACHING IN A PANDEMIC ERA: a case study at the University of Atacama

In the context of e-learning during the COVID-19 pandemic, medical education has suffered a significant impact on its training process due to various factors, such as the suspension of access to the university and/or clinical fields. It is therefore essential to know how medical students perceive the teaching methods used during the pandemic period and their impact on the students' careers. Accordingly, this study aimed to investigate the perceptions of medical students from the University of Atacama (UDA), Chile, and the factors related to the digital teaching methods adopted in the courses of the UDA Faculty of Medicine. As a preliminary study, we administered a survey to a sample of 51 students; it contained 32 questions covering 3 theoretical dimensions (sociodemographic background, information preferences, and learning perception styles). Using a causal inference method (Bayesian networks), we sought to unravel the relationships across these theoretical dimensions that correspond to meaningful learning. Based on these analyses, we identified the information-processing flow relating 20 questions to meaningful learning, as well as characteristics associated with the learning process. This work sought to develop a better understanding of alternative active methodologies and their effectiveness for the medical career.
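The abstract does not detail the analysis code. As a minimal, hypothetical sketch of a preliminary step related to the Bayesian-network analysis it describes (screening pairwise dependencies between categorical survey items via mutual information; this is not the full structure-learning procedure), one might write:

```python
import numpy as np
import pandas as pd

def mutual_information(x: pd.Series, y: pd.Series) -> float:
    """Mutual information (in bits) between two categorical survey items."""
    joint = pd.crosstab(x, y, normalize=True).values
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Hypothetical toy data standing in for the 51 responses x 32 questions.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 5, size=(51, 32)),
                         columns=[f"Q{i + 1}" for i in range(32)])

# Screen all question pairs and list the strongest dependencies.
pairs = [(a, b, mutual_information(responses[a], responses[b]))
         for i, a in enumerate(responses.columns)
         for b in responses.columns[i + 1:]]
for a, b, mi in sorted(pairs, key=lambda t: -t[2])[:5]:
    print(f"{a} -- {b}: {mi:.3f} bits")
```

The strongest pairwise dependencies would then be candidates for edges in a Bayesian network spanning the three theoretical dimensions.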

  • Open access
  • 87 Reads
The limitations of human information processing and their implications for parsimonious computational modelling and reliable Artificial Intelligence

Information theory is concerned with the study of the transmission, processing, extraction, and utilization of information. In its most abstract form, information is conceived as a means of resolving uncertainty. Shannon and Weaver (1949) were among the first to develop a conceptual framework for information theory. One of the key assumptions of the model is that uncertainty increases linearly with the amount of complexity (in bits) of the information transmitted or generated (C. E. Shannon, W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, Urbana, IL, 1949). Since then, a whole body of data from the cognitive neurosciences has shown that human response or action time increases in a similar fashion as a function of information complexity across a variety of situations and contexts. In this paper, I will discuss what is currently known about the limitations of human information processing. The implications for the development of parsimonious computational models in science, and for the idea of reliable Artificial Intelligence for science and society, will be made clear in the light of arguments from the cognitive neurosciences and computational philosophy. The goal of the presentation is to carve out a conceptual framework intended to inspire future studies on the problems identified.
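As a minimal, hypothetical illustration of the relationship the abstract alludes to (response or action time growing with the information content, in bits, of a choice), one can combine Shannon entropy for an n-alternative choice with a Hick-Hyman-style linear response-time model; the intercept and slope below are arbitrary placeholders, not values from the paper:

```python
import math

def choice_entropy_bits(n_alternatives: int) -> float:
    """Shannon entropy (bits) of a uniform choice among n alternatives."""
    return math.log2(n_alternatives)

def response_time_ms(bits: float, intercept_ms: float = 200.0,
                     slope_ms_per_bit: float = 150.0) -> float:
    """Hick-Hyman-style linear model: RT grows linearly with information (bits)."""
    return intercept_ms + slope_ms_per_bit * bits

for n in (2, 4, 8, 16):
    h = choice_entropy_bits(n)
    print(f"{n:2d} alternatives -> {h:.1f} bits -> predicted RT = {response_time_ms(h):.0f} ms")
```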

  • Open access
  • 62 Reads
A DIGITAL INFORMATION BEHAVIOUR MODEL

Previous Information Behaviour (IB) models were originally designed for the traditional environment and are not fit to explain human IB in the digital era. In addition, the continuous and rapid evolution of ICTs has called previous IB models into question, leaving an ongoing need to review and rework them and to subject them to scrutiny in the changing globalized and digital environment. There is therefore a need for an up-to-date IB model. This study presents a digital information behaviour (DIB) model that is relevant to the changing digital environment. It adopts a correlational survey design; a multi-stage sampling technique was used to select 400 respondents, of whom 233 returned questionnaires, giving a retrieval rate of approximately 58%. Questionnaires were used to obtain the data, and their psychometric properties are reported. Descriptive (frequency and percentage) and inferential (ANOVA and regression analysis) statistics were used to analyse the data. The study revealed that there are significant relationships among IB components such as information needs, search, and use, but none between users' information use and archival/disposal. In addition, users' ICT literacy does not give a significant impetus to their IB, which could affect the output quality of IB. Furthermore, among the four cognitive abilities, only verbal comprehension influenced users' IB. The study recommends revisiting curricula in all fields of study to expose students to the ICT literacy levels and cognitive abilities necessary for enhancing IB quality in the digital environment.
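The abstract does not give the analysis code. As a minimal sketch of the kind of inferential tests mentioned (one-way ANOVA and simple regression), assuming hypothetical columns such as ict_literacy, verbal_comprehension, and ib_score in a pandas DataFrame standing in for the 233 returned questionnaires:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Synthetic stand-in data; column names and scales are assumptions.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "ict_literacy": rng.integers(1, 4, size=233),          # 1=low, 2=medium, 3=high
    "verbal_comprehension": rng.normal(50, 10, size=233),
    "ib_score": rng.normal(70, 12, size=233),
})

# One-way ANOVA: does information behaviour differ across ICT-literacy groups?
groups = [g["ib_score"].values for _, g in df.groupby("ict_literacy")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Simple linear regression: does verbal comprehension predict IB?
reg = stats.linregress(df["verbal_comprehension"], df["ib_score"])
print(f"Regression: slope = {reg.slope:.2f}, R^2 = {reg.rvalue**2:.3f}, p = {reg.pvalue:.3f}")
```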

  • Open access
  • 107 Reads
Information-theoretic Underpinnings of the Effort-to-Compress Complexity Measure

Effort-to-Compress (ETC) is a measure of complexity based on a lossless data-compression algorithm that has been used extensively in the characterization and analysis of time series. ETC has been shown to give good performance for short and noisy time-series data and has found applications in the study of cardiovascular dynamics, cognitive research, and regulating the feedback of musical instruments. It has also been used to develop causal inference methods for time-series data. In this work, a theoretical analysis demonstrates the links of the ETC measure to the total self-information contained in the joint occurrence of the most dominant (shortest) patterns occurring at different scales (of time) in a time series. This formulation helps us to visualize ETC as a dimension-like quantity that computes the effective dimension at which patterns in a time series (translated to a symbolic sequence) appear. We also show that the algorithm that computes ETC can be used for an analysis akin to 'multifractal analysis', with which the power contained in patterns appearing at different scales of the sequence/series can be estimated. Multifractal analysis has been used widely in the analysis of biomedical signals and of financial and geophysical data. Our work provides a theoretical understanding of the ETC complexity measure that links it to information theory and opens up more avenues for its meaningful usage and application.
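The ETC algorithm itself is not restated in the abstract. A minimal sketch of the underlying idea, non-sequential recursive pair substitution (repeatedly replace the most frequent symbol pair with a new symbol until the sequence becomes trivial, counting the iterations), might look as follows; this is an illustrative implementation for integer-valued symbolic sequences, not the authors' reference code:

```python
from collections import Counter

def effort_to_compress(sequence):
    """Count NSRPS iterations needed to reduce a symbolic sequence to a
    constant (or single-symbol) sequence; normalised ETC = steps / (L - 1)."""
    seq = list(sequence)
    length = len(seq)
    steps = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        # Most frequent adjacent pair (overlapping count is a common convention).
        pair = Counter(zip(seq, seq[1:])).most_common(1)[0][0]
        new_symbol = max(seq) + 1  # fresh symbol for an integer alphabet
        out, i = [], 0
        while i < len(seq):
            # Replace non-overlapping occurrences of the chosen pair.
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(new_symbol)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        steps += 1
    return steps, steps / (length - 1) if length > 1 else 0.0

# Prints (steps, normalised ETC) for a short binary sequence.
print(effort_to_compress([0, 1, 1, 0, 1, 0, 1, 1]))
```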

  • Open access
  • 82 Reads
Natural Information Processes, Cognition, and Intelligence

In this talk I will present a framework of natural cognition and intelligence based on info-computation in living agents. The underlying assumption is that cognition in nature is a manifestation of biological processes that subsume chemical and physical processes (Maturana and Varela 1992; Stewart 1996; Dodig-Crnkovic 2007; Lyon 2005; Lyon and Kuchling 2021), from single cells to humans.

Cognitive science, with roots in psychology and the philosophy of mind, has historically focused on the human as the cognizing agent. Recently, Piccinini (2020) presented cognition as the result of neurocomputation in organisms with nervous systems. Piccinini goes a step beyond the anthropocentric understanding of cognition, but he retains neurocentrism. However, "cognitive operations we usually ascribe to brains—sensing, information processing, memory, valence, decision making, learning, anticipation, problem solving, generalization and goal directedness—are all observed in living forms that don't have brains or even neurons" (Levin et al. 2021). Thus, we generalize cognition a step further, to include all living forms, not only those with nervous systems.

I will argue that new insights about cognition and its evolution and development in nature (Walker, Davies, and Ellis 2017; Dodig-Crnkovic 2017), from cellular to human cognition (Manicka and Levin 2019; Levin et al. 2021; Lyon et al. 2021; Stewart 1996; Dodig-Crnkovic 2014), can be modelled as natural information processing, that is, natural computation or morphological computation.

In the info-computational approach, evolution in the sense of the extended evolutionary synthesis (Laland et al. 2015; Ginsburg and Jablonka 2019; Jablonka and Lamb 2014) is a result of interactions between natural agents: cells and their groups.

  • Open access
  • 68 Reads
Transformer architecture application in high-quality business name generation

The continuous improvement of artificial intelligence and machine learning is leading to an increasing search for wider applications of these technologies, not only to structured data but also to unstructured data. To apply data science to language, a dedicated area has emerged: natural language processing (NLP). NLP is the computational analysis and processing of natural language (spoken or written) using a variety of technologies that, drawing on linguistic methods, adapt human language to various tasks and computer programs.

At present, natural language processing is being applied to an ever wider range of practical problems. These tasks range from searching for meaningful information in unstructured data (Pande and Merchant, 2018), analyzing sentiments (Yang et al., 2020; Dang et al., 2020; Mishev et al., 2020), and translating text into another language (Xia et al., 2019; Gheini et al., 2021) to fully automated, human-level text creation (Wolf et al., 2019; Topal et al., 2021). The data set for this study consists of 350,928 observations/business names (299,964 observations in the training sample and 50,964 observations in the test sample). These data were collected from the websites of start-ups around the world. The aim of this study is to apply transformer-architecture language models to generate high-quality business names.
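The abstract does not name a specific model. As a minimal, hypothetical sketch of generating candidate business names with the Hugging Face transformers library (GPT-2 and the prompt below are placeholders, not the study's fine-tuned model or data):

```python
from transformers import pipeline

# Placeholder model; the study's own fine-tuned transformer would be loaded here instead.
generator = pipeline("text-generation", model="gpt2")

prompt = "Startup name ideas for a sustainable food delivery company:"
candidates = generator(
    prompt,
    max_new_tokens=20,
    num_return_sequences=5,
    do_sample=True,
    temperature=0.9,
)
for c in candidates:
    print(c["generated_text"])
```

In practice, the generated strings would still need filtering and quality scoring before being proposed as business names.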

  • Open access
  • 66 Reads
Use of machine learning techniques in chronic obstructive pulmonary disease: A case study in Baja California, Mexico

Chronic Obstructive Pulmonary Disease (COPD) is a chronic inflammatory lung disease that obstructs airflow from the lungs. Symptoms include difficulty breathing, coughing, mucus production, and wheezing. The study was conducted with 769 patients; 48.70% were women and 51.29% were men, and the average age of the enrolled patients was 60 years. The research included 67 variables covering medical history and biochemical data. The objective was to evaluate COPD using machine learning techniques to assess the patients and identify the determinant variables. The following classifiers were used: Support Vector Machines (SVM), K-Nearest Neighbors (kNN), Decision Tree, Random Forest, Neural Network, AdaBoost, and Logistic Regression. The model suggests that the determining variables for COPD in treated patients are the following: TA_dist, Cholesterol level, LDL levels, Dyslipidemia, Bradycardia, Venous Insufficiency, Systolic Dysfunction, Cardiac Arrhythmia, Vasomotor Headache, Smoking, and Esophageal Achalasia. They are therefore considered relevant in the decision-making process for choosing treatment or prevention. The analysis of the relationship between the presence of these variables and the classifiers used to assess COPD revealed that the Logistic Regression classifier, using these variables, achieved an accuracy of 0.90, a precision of 0.87, and an F1 score of 0.89. We can therefore conclude that the Logistic Regression classifier gives the best results for evaluating the determining variables for COPD assessment.
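As a minimal, hypothetical sketch of the kind of evaluation described (training a logistic-regression classifier on the selected variables and reporting accuracy, precision, and F1), the synthetic data and column names below only stand in for the real cohort:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, f1_score

# Synthetic stand-in for the 769-patient cohort; feature names are assumptions.
rng = np.random.default_rng(42)
features = ["TA_dist", "cholesterol", "LDL", "dyslipidemia", "bradycardia",
            "venous_insufficiency", "systolic_dysfunction", "cardiac_arrhythmia",
            "vasomotor_headache", "smoking", "esophageal_achalasia"]
X = pd.DataFrame(rng.normal(size=(769, len(features))), columns=features)
y = rng.integers(0, 2, size=769)  # 1 = COPD, 0 = no COPD

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy :", round(accuracy_score(y_test, y_pred), 2))
print("precision:", round(precision_score(y_test, y_pred), 2))
print("F1 score :", round(f1_score(y_test, y_pred), 2))
```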

  • Open access
  • 19 Reads
Volumetric Entropy Associated with Short Light Pulses Propagation in Gaseous Media

One of the fundamental interests in laser pulse propagation through atomic or molecular media relies on controlling the propagation in terms of the area of the electric field's envelope over time. This is meaningful as long as the initial phases of the atomic dipoles can be ignored and the pulses are kept at resonance with their respective optical transitions. Otherwise, the integral of the pulse envelope over time becomes complex, and the real-valued area is no longer properly defined. Alternatively, in this short communication, we seek local stabilization of the propagated pulses in terms of the volumetric entropy of the Bloch ball. The Bloch ball is formed by an irreducible tensorial set of the density matrix in Liouville space subjected to a trace-metric constraint. The volumetric entropy shows stabilization for off-resonant propagation and displays a space-dependent dip close to resonance. This study proposes supremum and infimum identifiers to quantify the divergence of the upper and lower limits of the integral over the complex envelope of the pulse. The adopted volumetric entropy avoids having to implement a complex-valued information measure of entropy. We examine our proposed technique through short-pulse propagation in duplicated two-level atom media. The extension of the formalism to multilevel atoms and off-resonant polychromatic field excitations is possible through an n-dimensional Bloch-ball treatment. Moreover, volumetric entropy may quantify propagation with frequency chirping and account for shifts due to nearby transitions. In fact, entropy has become an essential mathematical measure in studying information processing and its quantification in communicating atomic channels and networks. In typical light-storage and light-retrieval experiments, the information written or read out may be complex, and the efficiency of restoring it is defined through energy demands. Our treatment, taking into account the supremum and infimum of the complex pulse integral, leads to cross-modulation effects, and the two limits show different stability regions across propagation. We therefore propose volumetric entropy as a measure of complex information stability.
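For context, the quantity the abstract calls "the area of the electric field's envelope over time" is the standard pulse area; a common textbook definition (not restated in the abstract), with $\Omega(z,t)$ the Rabi frequency associated with the slowly varying envelope $\mathcal{E}(z,t)$ at propagation distance $z$ and $d$ the transition dipole moment, is:

```latex
\theta(z) = \int_{-\infty}^{\infty} \Omega(z,t)\,\mathrm{d}t,
\qquad
\Omega(z,t) = \frac{d\,\mathcal{E}(z,t)}{\hbar}.
```

Off resonance, or with nonzero initial dipole phases, the envelope (and hence this integral) becomes complex, which is the difficulty the volumetric-entropy description is intended to sidestep.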

  • Open access
  • 69 Reads
Optimal tuning of natural frequency to mitigate multipath propagation interference in multiuser chaotic communication

In this work, we study a wireless communication system designed to handle multiple users, each communicating within their own frequency band. We analyse the performance of the system (its information capacity) while considering different numbers of users and different multipath propagation configurations for each user and their base frequency. Our interest is to discover worst-case scenarios (blind spots) and best-case scenarios (maximal information transmission). Our results show that an effective prior choice of parameters, such as the natural frequency of the chaotic signal, leads to significant throughput improvement and, in some cases, even enables error-free communication. Physical constraints in the channel can often be expected to block the higher-frequency signals used in standard non-chaotic communication systems. For our chaos-based communication system, we show that a significant wireless channel constraint, multipath propagation, becomes less disruptive the higher the frequency of the user. That is a win-win result: not only does the higher frequency allow more information to be transmitted, but it also mitigates multipath interference. Moreover, there is an ideal relationship between the propagation times of the signals and the user's natural frequency that results in communication with no interference due to multipath, effectively creating a wireless communication system with gains similar to those obtained in wired communication, where multipath propagation is not a significant issue. As an application of our system, we study a sensor-network configuration in which the sensors share the same frequency and show that, given a particular set of propagation times for the signal to leave the sensor and arrive at the receiving station, we can determine an optimal natural frequency for the sensors that maximises information transmission. The converse also holds: optimal natural frequencies can be deduced from the propagation-time configurations. Some optimal spatial configurations with no interference due to multipath are deduced from this analysis.
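As a toy illustration of the general point about path delay versus oscillation period (not of the chaos-based system itself, whose details the abstract does not give), one can model a two-path channel in which a delayed, attenuated copy of a sinusoidal carrier is added to the direct path and observe how the received power depends on the ratio of delay to signal period:

```python
import numpy as np

def received_power(freq_hz: float, delay_s: float, attenuation: float = 0.5,
                   duration_s: float = 1.0, fs: float = 100_000.0) -> float:
    """Average power at the receiver for a two-path channel (direct + delayed copy)."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    direct = np.sin(2 * np.pi * freq_hz * t)
    delayed = attenuation * np.sin(2 * np.pi * freq_hz * (t - delay_s))
    return float(np.mean((direct + delayed) ** 2))

freq = 100.0                      # hypothetical user "natural frequency" in Hz
period = 1.0 / freq
for k in (0.25, 0.5, 1.0, 1.5, 2.0):
    p = received_power(freq, delay_s=k * period)
    print(f"delay = {k:>4} periods -> mean received power = {p:.3f}")
# Delays equal to an integer number of periods add constructively (no destructive
# multipath interference); half-integer delays are the worst case.
```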

  • Open access
  • 69 Reads
Open Government Data Adoption and Implementation Research: A Meta-Analysis

Open Government Data (OGD) has been catalyzed by advances in electronic government, digital technologies, and citizens' sense of ownership over the last decade. Earlier OGD studies have applied different theoretical bases to explore organizations' behavior in adopting or implementing OGD, but they have produced uneven results. This study combines various theoretical bases and conducts a weight analysis and meta-analysis of OGD research. Further, an interconnected model of existing factors impacting OGD adoption or implementation is portrayed by synthesizing empirical results in OGD studies. Inconsistencies in the level of effect and significance among several factors within empirical OGD studies at the organizational level (represented by decision-makers) provide the basis for conducting a weight analysis and meta-analysis. The main purpose of this study is therefore to carry out a weight analysis and meta-analysis of all factors to confirm their overall influence on the adoption or implementation of OGD. The results of this study suggest that organizational capacity, external pressures, and organizational readiness are significant factors of adoption behavior, while implementation intention, organizational arrangement, technical capacity, and organizational awareness are the significant factors of OGD implementation behavior. The recommendations drawn from this research would help to decide if and when to use such antecedents for predicting the adoption and implementation of OGD.
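The abstract does not specify the pooling formulas. As a minimal sketch of a common fixed-effect approach (inverse-variance weighting of per-study effect sizes for one factor), with hypothetical numbers in place of the study's actual data:

```python
import math

# Hypothetical (effect size, standard error) pairs for one factor across studies.
studies = [(0.32, 0.10), (0.18, 0.08), (0.45, 0.15), (0.27, 0.12)]

weights = [1.0 / se**2 for _, se in studies]                 # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
z = pooled / pooled_se

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI), z = {z:.2f}")
```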
