**Quantifying Total Correlations between Variables with Information Theoretic and Machine Learning Techniques**

**Published:** 17 November 2019 by **MDPI** in the **5th International Electronic Conference on Entropy and Its Applications**, session **Information Theory, Probability, Statistics, and Artificial Intelligence**

**Abstract:**

The increasingly sophisticated investigation of complex systems requires more robust estimates of the correlations between the measured quantities. The traditional Pearson correlation coefficient is easy to calculate but is sensitive only to linear correlations. The total influence between quantities is therefore often expressed in terms of the mutual information, which also takes nonlinear effects into account but is not normalised. To compare data from different experiments, the Information Quality Ratio is therefore in many cases easier to interpret. On the other hand, both the mutual information and the Information Quality Ratio are always positive and therefore cannot indicate the sign of the influence between quantities. Moreover, they require an accurate determination of the probability distribution functions of the variables involved. Since the quality and amount of available data are not always sufficient to guarantee an accurate estimation of these probability distribution functions, it has been investigated whether neural computational tools can help and complement the aforementioned indicators. Specific encoders and autoencoders have been developed for the task of determining the total correlation between quantities, including the sign of their mutual influence. Both their accuracy and computational efficiency have been addressed in detail, with extensive numerical tests using synthetic data. The first applications to experimental databases are very encouraging.
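To make the indicators mentioned in the abstract concrete, the following is a minimal histogram-based sketch of estimating the mutual information I(X;Y) and the Information Quality Ratio IQR = I(X;Y)/H(X,Y) from samples. This is an illustrative estimator only, not the one used in the paper; the function name, bin count, and test signal are assumptions.

```python
import numpy as np

def mutual_information_and_iqr(x, y, bins=32):
    """Histogram estimates of I(X;Y) and IQR = I(X;Y) / H(X,Y)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                 # joint pdf estimate
    px = pxy.sum(axis=1)             # marginal of X
    py = pxy.sum(axis=0)             # marginal of Y
    nz = pxy > 0                     # skip empty bins to avoid log(0)
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))          # joint entropy
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    return mi, mi / h_xy

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = x**2 + 0.1 * rng.normal(size=10_000)   # purely nonlinear dependence
mi, iqr = mutual_information_and_iqr(x, y)
```

Note that for this y = x² relation the Pearson coefficient is close to zero, while the mutual information is clearly positive; the IQR normalises it to the [0, 1] range, which is what makes comparisons across experiments easier.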

**Keywords:** machine learning tools; information theory; information quality ratio; total correlations; encoders; autoencoders

**Geert Verdoolaege**

Many thanks for a very interesting presentation. I have two questions:

1. Can you explain how exactly you obtain the correlation coefficients from the weight matrix of the autoencoder network?

2. For future work, you mention that the method can be extended to multiple variables. Are you referring to the case of correlation between a set of more than two variables? This would be a very interesting extension.

Thank you for your replies.

**Michele Lungaroni**

Thank you for your interest.

**Regarding question number 1:**

A neural network assigns a weight to each input (i) of each neuron (n) of each layer (l): a_l_n_i. For example, a_1_2_3 is the weight of input 3 of neuron 2 in layer 1.

The weight matrix in the paper is calculated as the chain product of the weight matrices of the individual layers.

Suppose we have a simple neural network with two layers, three inputs, two neurons in the first layer and three in the second. The matrices of the two layers (rows indexed by neuron, columns by input) are:

A_layer,1 = [a_1_1_1 a_1_1_2 a_1_1_3; a_1_2_1 a_1_2_2 a_1_2_3]

A_layer,2 = [a_2_1_1 a_2_1_2; a_2_2_1 a_2_2_2; a_2_3_1 a_2_3_2]

And the weight matrix, which will be 3x3 since there are 3 variables, is obtained by chaining the layers (A_layer,1 is 2x3 and A_layer,2 is 3x2, so the product must be taken in this order to give a 3x3 matrix mapping the three inputs to the three outputs):

W = A_layer,2 * A_layer,1
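The chain product above can be sketched numerically as follows. This is only an illustration of the dimension bookkeeping for the toy two-layer network in the example (random weights, biases and activation nonlinearities ignored); the variable names are hypothetical.

```python
import numpy as np

# Toy network from the example: 3 inputs, 2 neurons in layer 1, 3 in layer 2.
# Each matrix is stored as (neurons x inputs), so the network applies
# A_layer2 @ (A_layer1 @ x) to an input vector x.
rng = np.random.default_rng(1)
A_layer1 = rng.normal(size=(2, 3))   # layer 1: 2 neurons, 3 inputs
A_layer2 = rng.normal(size=(3, 2))   # layer 2: 3 neurons, 2 inputs

# Chaining the layers yields the 3x3 input-to-output weight matrix:
# entry W[j, i] aggregates all paths from input i to output j.
W = A_layer2 @ A_layer1
```

Reversing the product order (A_layer1 @ A_layer2) would give a 2x2 matrix, which is why the layers must be chained with the last layer on the left.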

**Regarding question number 2:**

Yes, we want to extend this work to correlations among more than two variables.

The total correlation approach has been investigated using just one input and one output (see Section 4). The method is easily extendable to the multivariable case; however, it is of crucial importance to understand the effects of the discretisation of the variables, of the noise, and of the signal frequency.

We are working on it and we plan to address most, if not all, of these issues in the extended version of the paper for the journal.

For further questions, please do not hesitate to contact the authors.

**Feiyan Liu**

I have a question: can you explain in detail how to compute the correlation coefficients with the new approach proposed in this paper? I am a little confused.

Thank you for your replies.