
List of accepted submissions

 
 
 
  • Open access
  • 152 Reads
Entropy, Statistical Thermodynamics and Stochastic Processes

The undeniable appeal of statistical mechanics has led to numerous attempts to extend its tools to processes and problems outside the realm of molecules and physical particles. However, no formal theory exists to guide us on the application of statistical mechanics outside physics and chemistry. In this talk I will show that the basic elements of statistical mechanics are universal to generic stochastic processes.

The theory views a stochastic process as a network of chemical reactions. We begin with a finite sample of the event space at time zero and construct all possible future paths based on the transformations that are possible under the rules of the stochastic process. We define the ensemble of states that can be reached in a fixed number of steps from the initial state (the feasible space), define its probability and formulate its master equation. We show that when the size of the initial sample increases indefinitely (the asymptotic limit), the feasible space becomes continuous but its probability distribution converges to discrete points that represent thermodynamic phases. If only one phase is present, the ensemble is represented by its most probable distribution. We work out the calculus of the most probable distribution in the asymptotic limit, identify the functional whose maximization produces that distribution, and express the most probable distribution in terms of a partition function and its derivatives. We analyze four problems under this theory: (a) random walk, (b) binary clustering, (c) binary fragmentation and (d) equilibrium exchange, and give examples of phase splitting in these systems.
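As a toy illustration of the feasible-space idea (our own sketch, not taken from the talk): for a simple ±1 random walk, the states reachable in n steps and their path multiplicities can be enumerated directly, and the resulting distribution concentrates on its most probable state as n grows.

```python
# Minimal sketch (not from the talk): the feasible space of a simple
# random walk after n steps, with the weight of each reachable state
# given by its path multiplicity, in the spirit of an ensemble.
from math import comb

def feasible_space(n):
    """Reachable positions after n +/-1 steps and their probabilities."""
    states = {}
    for k in range(n + 1):              # k = number of +1 steps
        x = 2 * k - n                   # resulting position
        states[x] = comb(n, k) / 2**n   # path multiplicity / total paths
    return states

dist = feasible_space(20)
x_star = max(dist, key=dist.get)        # most probable state
print(x_star, dist[x_star])             # concentrates near 0 as n grows
```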

  • Open access
  • 30 Reads
Walking down over the spatiotemporal scales in a particular nonequilibrium-thermodynamics dissipative phenomenon called friction

Adam Gadomski, Institute of Mathematics and Physics, UTP University of Science and Technology, Bydgoszcz, Poland

It is intriguing to fully comprehend whether the simple, macroscopic-scale Coulomb-Amontons law, which describes the static friction effect as a ratio of friction force to the corresponding load, is preserved when looking into more fine-grained surface (or interface) dimensional scales [1]. This question is of utmost interest when attempting to comprehend the complex friction and lubrication phenomenon and its relevance to bioinspired issues, pertaining to biomimetic solutions represented by natural articulating devices, such as articular cartilage, examined carefully at (sub)mesoscopic scales [2-4]. In what follows, a particular nonequilibrium-thermodynamics, dissipation-addressing framework is offered to unravel explicitly the spatiotemporal, and implicitly, the force-field scales. It is based upon a dissipative autonomous ordinary-differential system, equipped with fractal-like kinetics, and fully immersed in the mesoscopic scale [4]. Its nanoscopic viz. microscopic extension can be introduced either by means of certain anomalous-diffusion vs. (mechanical) relaxation parametric sets, or by thoroughly employing molecular dynamics computer simulations [3]. A survey of adequately formulated experimental and computer-simulation based long-perspective arguments has recently been proposed in [5]. The main idea of revealing the scale peculiarities touches upon a certain assumption on structure formation in the friction interlayer. The structure formation causes a suitable microscopic response of the soft material, i.e. a diffusion-relaxation involving propensity. A certain shortcoming of the proposed method points unavoidably to a lack of precise information about the involved force-field magnitudes and their consecutive characteristics. Other studies do not exhibit such a drawback [6], but they do not deal fully and satisfactorily with the scale problem [1-4]. Our aim is to fill in the gap by proposing a scale-sensitive formulation of the friction-lubrication problem of importance for biological systems, exemplified by the articulating joints.

  • Open access
  • 54 Reads
Tunneling as a Source for Quantum Chaos

Classical chaos is generally defined as exponential divergence of nearby trajectories, causing instability in the orbits with respect to initial conditions. The wave function may be thought of as representing an ensemble of points in phase space, and a fast spreading of the wave packet can be compared with a rapid or exponential separation of neighboring trajectories in the classical case. We use a one-dimensional model of a square barrier embedded in an infinite potential well to demonstrate that tunneling leads to a complex behavior of the wave function and that the degree of complexity may be quantified by use of the spatial entropy function

$$S(t) = -\int |\Psi(x,t)|^2 \,\ln |\Psi(x,t)|^2 \, dx$$
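A minimal numerical sketch of this entropy (our own illustration; the Gaussian packet below is just a test case with a known analytic value):

```python
# Hedged sketch (ours): spatial entropy S(t) of a normalized wave
# packet on a grid, S = -integral |psi|^2 ln |psi|^2 dx.
import numpy as np

def spatial_entropy(psi, dx):
    """psi: complex wave function samples; dx: grid spacing."""
    p = np.abs(psi)**2
    p = p / (p.sum() * dx)              # enforce normalization
    mask = p > 0                        # avoid log(0)
    return -np.sum(p[mask] * np.log(p[mask])) * dx

# Example: a Gaussian packet, for which S is known analytically.
x = np.linspace(-10, 10, 2001)
sigma = 1.0
psi = (2 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4 * sigma**2))
print(spatial_entropy(psi, x[1] - x[0]))   # ~ 0.5*ln(2*pi*e*sigma^2)
```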

Chaos is supposed to imply an increase of the entropy, and a rapid rise of the entropy function can be understood as a burst of chaotic behavior. There is no classical counterpart to tunneling, but a decrease in tunneling may be interpreted as an approach of the quantum system to a classical one.

We show that replacing the square barrier with barriers of increasing height/breadth not only decreases the tunneling but also slows down the rapid rise of the entropy function, which for a low/thin barrier climbs quickly at small times and then fluctuates around a smooth, almost constant asymptotic value.

The mean square width of the wave packet likewise shows a rapid rise for a low/thin barrier before settling to a steady asymptotic mean.

We conclude that the complex behavior associated with tunneling and the rapid rise of entropy is similar to that expected from a chaotic dynamical system. We therefore suggest that the rapid spread of the wave function can be used as a definition for quantum chaos.

  • Open access
  • 131 Reads
Information Entropy of Single-Gene Expression Responses During Genome Wide Perturbations

Transcription factors (TFs) are known to drive gene-to-gene interaction dynamics under optimal growth conditions, but less is known about how much they affect the dynamics of gene regulatory networks (GRNs) at the global level, due to the contribution of many other variables.

We investigate how the TF interactions of the GRN of E. coli affect the global entropy of single-gene response dynamics during a genome-wide perturbation caused by a shift in RNA polymerase (RNAp) concentration.

For this, we classified genes based on their number of (known) input TFs. We also assigned a value to each TF input (-1 for repression and +1 for activation) and classified genes based on the sum of their input interactions. For both classification schemes, we estimated the information entropy of the single-gene input interactions of each class.

Next, we measured by RNA-seq the fold changes of each gene due to weak, medium, and strong perturbations of RNAp concentration, from which we quantified the information entropy of single-gene responses of each class.
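A minimal sketch of this entropy calculation (our own illustration; the fixed bins and the random fold changes below are hypothetical placeholders, not the measured RNA-seq data):

```python
# Sketch (ours): Shannon entropy of binned log fold changes for one
# class of genes, comparable across classes via fixed bin edges.
import numpy as np

def class_entropy(fold_changes, bin_edges):
    """Entropy (bits) of the empirical fold-change distribution."""
    counts, _ = np.histogram(np.log2(fold_changes), bins=bin_edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

edges = np.linspace(-6, 6, 49)    # fixed bins so classes are comparable
rng = np.random.default_rng(0)
weak = class_entropy(2 ** rng.normal(0, 0.3, 500), edges)    # weak shift
strong = class_entropy(2 ** rng.normal(0, 1.5, 500), edges)  # strong shift
print(weak, strong)   # the broader response distribution has higher entropy
```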

We found that the information entropy of the fold changes of the classes of genes increases (non-linearly) with the magnitude of the perturbation, in a manner that is consistent with the information entropy of the sum of the input interactions of individual genes, rather than their number of inputs.

Overall, we argue that, in the event of genome-wide perturbations, asymmetries in the input functions of TFs partially control the propagation of information between genes of the GRN of E. coli.

  • Open access
  • 59 Reads
Information flow in Color Appearance Neural Networks

Color Appearance Models are biological neural networks that consist of a cascade of linear+nonlinear layers that transform the measurements at the retina into an internal representation of color that correlates with psychophysical experience. The basic layers of these networks include: (1) chromatic adaptation (normalization of the mean and covariance of the color manifold), (2) a change to opponent color channels (a PCA-like rotation), and (3) saturating nonlinearities to get perceptually Euclidean color representations (dimension-wise equalization). The Efficient Coding Hypothesis in neuroscience argues that these transforms should emerge from information theory [Barlow Proc.Nat.Phys.Lab.59, Barlow Network 01]. In the specific case of color vision there is a substantial body of evidence for this [Buchsbaum Proc.Roy.Soc.83, Twer Network 01, Laparra&Malo Neural Comp. 12, Laparra&Malo Front.Human.Neurosci.15]. The question for these color networks is: what is the coding gain due to the different mechanisms in the networks?
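A minimal sketch of such a cascade (our own construction, not any specific published model; the tanh saturation is a stand-in for psychophysically fitted nonlinearities):

```python
# Sketch (ours): the three canonical stages applied to an (N, 3)
# array of cone-like responses.
import numpy as np

def color_appearance_cascade(lms):
    # (1) chromatic adaptation: normalize mean and covariance (whitening)
    x = lms - lms.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    x = x @ eigvec / np.sqrt(eigval)   # decorrelate + unit variance
    # (2) opponent channels: the PCA-like rotation is supplied here by
    #     the eigenvector basis; the rotated axes act as opponent channels
    # (3) saturating nonlinearity: dimension-wise equalization
    return np.tanh(x)                  # stand-in saturating curve

lms = np.random.default_rng(1).lognormal(size=(1000, 3))
print(color_appearance_cascade(lms).std(axis=0))
```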

In this work, representative Color Appearance Models are analyzed in terms of how they modify the statistical redundancy along the network and how much information is transferred from the input to the noisy response. The proposed information-theoretic analysis is done using methods and data that were not available before: (1) new statistical tools to estimate (multivariate) information-theoretic quantities between multidimensional sets based on Gaussianization [Laparra&Malo IEEE Trans.Neur.Nets.11, Johnson, Laparra & Malo ICML 19], and (2) new colorimetrically calibrated scenes in different CIE illuminations for proper evaluation of chromatic adaptation [Gutmann,Laparra, Hyvarinen & Malo PLOS 14].

Results identify the psychophysical mechanisms critically responsible for the gains in chromatic information transference: the opponent channels and their nonlinear nature are more important than chromatic adaptation at the retina. Moreover, our visual neural pathway allocates at least 70% of the information capacity to spatial information, as opposed to only 30% devoted to color.

  • Open access
  • 111 Reads
Computing variations of entropy and redundancy under nonlinear mappings not preserving the signal dimension: quantifying the efficiency of V1 cortex

In computational neuroscience, the Efficient Coding Hypothesis argues that the neural organization comes from the optimization of information-theoretic goals [Barlow Proc.Nat.Phys.Lab.59]. A way to confirm this requires the analysis of the statistical performance of biological systems that have not been statistically optimized [Renart et al. Science10, Malo&Laparra Neur.Comp.10, Foster JOSA18, Gomez-Villa&Malo J.Neurophysiol.19].

However, when analyzing the information-theoretic performance, cortical magnification in the retina-cortex pathway poses a theoretical problem. Cortical magnification stands for the increase of the signal dimensionality [Cowey&Rolls Exp. Brain Res.74]. Conventional models based on redundant wavelets increase the dimension of the signal by one order of magnitude [Watson CVGIP87, Schwartz&Simoncelli Nat.Neurosci.01]. Such an increase poses a problem for quantifying the efficiency of the transforms. In fact, previous accounts of the information flow along physiological networks had to resort to some sort of approximation to deal with magnification, e.g. (1) using orthonormal wavelets or other dimension-preserving transforms [Bethge JOSA06, Malo&Laparra Neur.Comp.10], or (2) using a reference for the relations introduced by the redundant transform [Laparra&Malo JMLR10, Gomez-Villa&Malo J.Neurophysiol.19].

In this work, we address the information-theoretic analysis of such nonlinear systems that do not preserve dimension without resorting to approximations. On the one hand, we derive the theory to compute variations of entropy and total correlation under such transforms, which involves knowledge of the Jacobian of the system with respect to the input. To that end, we use the analytical results in [Martinez&Malo PLOS18]. On the other hand, we compare such predictions with a recently proposed non-parametric estimator of information-theoretic measures: the Rotation-Based Iterative Gaussianization [Laparra&Malo IEEE Trans.Neur.Nets11, Johnson, Laparra&Malo ICML19]. Consistency between the results validates the theory and provides new insights into the visual neural function.
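To see why the change of dimension is the obstacle, recall the standard change-of-variables relation for differential entropy under an invertible, dimension-preserving map (textbook background, not a result of this work):

$$h(\mathbf{y}) = h(\mathbf{x}) + \mathbb{E}_{\mathbf{x}}\!\left[\log\left|\det \nabla f(\mathbf{x})\right|\right], \qquad \mathbf{y} = f(\mathbf{x}).$$

When f maps to a higher-dimensional space, the Jacobian is no longer square and its determinant is undefined, so this relation must be generalized; that is the gap the abstract addresses.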

  • Open access
  • 86 Reads
Comparative performance analysis of a Brownian Carnot cycle from the perspective of a stochastic model against the Linear Irreversible Thermodynamics theory.
Published: 05 May 2021 by MDPI in Entropy 2021: The Scientific Tool of the 21st Century session Thermodynamics

In this work we present a Brownian Carnot cycle, which has already been studied by Schmiedl et al. (2007) as well as Izumida and Okuda (2010), but now considering two different working regimes, namely Maximum Ecological Function (MEF) and Maximum Efficient Power (MEP). For the MEF and MEP working regimes, the thermodynamic properties of the cycle are obtained; in particular, it is shown that the maximum efficiency now depends on two parameters α and β, instead of the single parameter obtained previously by Schmiedl et al. in the maximum power regime. It is worthwhile to notice that for characteristic values of α and β the original results obtained by Schmiedl are recovered.
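For reference, the objective functions defining the two regimes, in their usual finite-time-thermodynamics form (the abstract does not spell them out, so these are the standard textbook definitions rather than the authors' exact expressions):

$$E = P - T_c\,\sigma \qquad \text{(ecological function: power output minus cold-reservoir temperature times total entropy production)},$$

$$P_\eta = \eta\,P \qquad \text{(efficient power: efficiency times power output)}.$$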

From the previous observations, the authors consider that the results obtained represent a more general case that includes other working regimes. It is important to remark that one of the most astonishing results is that these thermal engine models show some universality regarding the behavior of the efficiency when working in the maximum power regime [10], although the analyzed models differ in nature and scale.

  • Open access
  • 93 Reads
A stunning realisation: the touted defiance of Bell's inequality by quantum probabilities derives from a mathematical error

I shall display the mathematical error in the currently accepted derivation of the expected value of Bell's quantity “s” in the context of a gedankenexperiment on a single pair of photons in CHSH form. The fact that this mistaken value exceeds 2 supports the touted conclusion of quantum theorists that quantum probabilities defy Bell's inequality if the principle of local realism is presumed. The error is based on the neglect of four symmetric functional relations among the four components of s in a thought experiment designed to assess this principle. The expectation of the linear combination defining s is not twice the square root of 2 as is widely supposed, but rather is found to be an interval rounded to (1.1213, 2.0] when calculated via linear programming procedures. There are four dimensions of freedom in the coherent expectation polytope. I shall display the slices of this polytope as it passes through 3-D space. A comment on the maximum entropy distribution within this polytope will conclude the presentation. I shall introduce the contents of four papers relevant to the issue, which are available on ResearchGate: Quantum violation of Bell's inequality: a misunderstanding based on a mathematical error of neglect; The GHSZ argument: a gedankenexperiment requiring more denken; Resurrecting the principle of local realism and the prospect of supplementary variables; More Hoojums than Boojums: quantum mysteries for no one. The GHSZ article has been published in Entropy.
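For background (standard CHSH material, not part of the author's argument): the quantity s is built from four ±1-valued measurement outcomes,

$$s = a\,b + a\,b' + a'\,b - a'\,b', \qquad a,\,a',\,b,\,b' \in \{-1,+1\},$$

so pointwise $|s| \le 2$, while the quantum-mechanical expectation usually quoted for the corresponding operator combination is $2\sqrt{2} \approx 2.83$ (the Tsirelson bound); it is this accepted value that the abstract disputes.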

  • Open access
  • 57 Reads
New parameters and extensive methodology to describe the three phase transitions in the q-state clock model

In the q-state clock model the spin has q possible orientations in the plane, so it can be understood as a generalization of the Ising model, for which q=2. The Hamiltonian is then the scalar product of neighboring spins mediated by the ferromagnetic exchange interaction J, homogeneous throughout the square lattice with L×L = N spins. It is known that for q≤4 there is only one phase transition, at a temperature T1, above which the ferromagnetic phase is lost. Using global order parameters, it has been previously established that for q≥5 this transition moves steadily to lower temperatures as q increases [1]. For large L, the appearance of the so-called Berezinskii–Kosterlitz–Thouless (BKT) phase, characterized by vortex-like structures, is established, while a second transition to a disordered phase appears at a higher temperature T2. In the present paper we characterize in depth the nature of this second transition by means of new local order parameters. Surprisingly, a subtle additional transition appears at a temperature slightly above the second one (at T3), which requires interpretation. This is resolved by considering pure and mixed ferromagnetic, vortex and paramagnetic phases as T increases, which requires local order parameters and a new methodology to handle them better. Thus, we now include information-theory analysis by means of mutability and Shannon entropy characterization. Tendencies towards large N and q values are established.

[1] O.A. Negrete, P. Vargas, F. Peña, G. Saravia, and E.E. Vogel, Entropy 20, 933 (2018).
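For concreteness, a minimal Monte Carlo sketch of the model described above (our own; the Hamiltonian is the standard clock-model form implied by the abstract, H = -J Σ⟨i,j⟩ cos(θi - θj) with θi = 2πsi/q):

```python
# Sketch (ours): one Metropolis sweep for the q-state clock model
# on an L x L square lattice with periodic boundaries.
import numpy as np

def metropolis_sweep(spins, q, J, T, rng):
    """spins: (L, L) int array with values in [0, q)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        new = rng.integers(0, q)
        theta_old = 2 * np.pi * spins[i, j] / q
        theta_new = 2 * np.pi * new / q
        dE = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            theta_nb = 2 * np.pi * spins[(i + di) % L, (j + dj) % L] / q
            dE += -J * (np.cos(theta_new - theta_nb)
                        - np.cos(theta_old - theta_nb))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
    return spins

rng = np.random.default_rng(0)
spins = rng.integers(0, 6, size=(16, 16))   # q = 6 exhibits the BKT phase
metropolis_sweep(spins, q=6, J=1.0, T=0.7, rng=rng)
```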

  • Open access
  • 107 Reads
The Intrinsic Entropy as Substitute for the Market Volatility of Underlying Securities

Grasping the market volatility of underlying securities, and accurately estimating it in particular, is one of the salient preoccupations of those involved in the securities industry and in derivative instrument pricing.

This paper presents the results of employing the intrinsic entropy model as a substitute for the market volatility of underlying securities. Diverging from the widely used volatility models that take into account only elements of the traded prices, namely the Open, High, Low, Close prices of a trading day (OHLC), the intrinsic entropy model also quantifies the volumes traded during the considered time frame. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities, in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as an entropy probability, or market credence, associated with the corresponding price level.

The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Russell 2000, DAX Performance-Index, CAC 40, Hang Seng Index and Nikkei 225). We compare the results produced by the intrinsic entropy model with the volatility obtained for the same data sets using volatility estimators widely employed in the industry, such as Parkinson (HL), Garman-Klass (OHLC), Rogers-Satchell (OHLC), the Garman-Klass Yang-Zhang extension (OHLC) and Yang-Zhang (OHLC).
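For concreteness, a sketch of one of the named estimators alongside an illustrative volume-share entropy (our own; the entropy function below is a stand-in for intuition, not the authors' exact intrinsic entropy formula):

```python
# Sketch (ours): Parkinson high-low volatility estimator, next to a
# simple Shannon entropy of daily volume shares over a period.
import numpy as np

def parkinson_vol(high, low):
    """Parkinson estimator from daily high/low prices."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(np.mean(hl**2) / (4 * np.log(2)))

def volume_ratio_entropy(volumes):
    """Entropy of daily volume shares over the considered period."""
    p = np.asarray(volumes, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p))

high = [101.2, 102.5, 100.9]; low = [99.8, 100.7, 99.1]
vols = [1.2e6, 3.4e6, 2.1e6]
print(parkinson_vol(high, low), volume_ratio_entropy(vols))
```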

We consequently study the efficiency of the intrinsic entropy and volatility estimates by comparing them with the standard close-to-close volatility estimate. The intrinsic entropy model proves to consistently deliver a minimal estimation error for the various time frames we experimented with, along with its distinctive indication of the market's inclination toward either buying or selling the underlying security.
