
List of accepted submissions

  • Open access
State of the Art of Information Technology Computing Models for Autonomic Cloud Computing

Abstract: In this paper we present several models of computation for autonomic cloud computing. In particular, we argue that if autonomic cloud computing requires perfect self-reflection, then models of computation going beyond Turing machines are needed; to merely approximate self-reflection, models below Turing machines are sufficient. These claims are illustrated using the DIME Network Architecture as an example.

  • Open access
Symbolic Information In Computing Devices

This extended abstract demonstrates how mathematical modeling reveals new information about physical symbols on the computer. A system of models based on Mark Burgin’s named-set theory uncovers a uniform pattern of data relations hidden in the relational database, called the Aleph data relation. Its parent/child relationship is self-similar to a tree structure. The discovery of the Aleph paves the way for transforming relational data into decision trees automatically. The system of models that led to this discovery also provides a first glimpse into a deeper, more balanced view of the 0’s and 1’s on the computer. When the migration of the Aleph through a decision tree is modeled over four different states, each graphic depiction differs because each responds to a different feature of the setting. However, when the model in each state focuses solely on the Aleph’s physical symbols and their parent/child relationship, its composition remains intact over time and space. From this mathematical evidence, the author concludes that a deeper mathematical system holds the Aleph and its symbols together, and on this basis he proposes a theory of digital symbols: all digital symbols on the computer are composed of two meta-symbols, 1) physical values and 2) constructed types. The computer system employs these meta-symbols to manipulate strings of zeroes and ones, in a linguistic fashion, as types and tokens.

  • Open access
Transfer of genetic information: an innovative model

Nature’s Little Accounting Tricks

 

The sentence ‘this has been defined thus’ brings forth differing consequences in the technical sciences and in the humanities. The former arrive at rules that regulate how the defined concept fits within the system of other concepts in a clean fashion, free of contradictions. In the latter, one will immediately question the social power of the ruling class and investigate what advantages have been achieved, and for whom, by de-legitimising alternatives to the imposed definition, and whether the presently ruling elite still maintains the credibility to monopolise the interpretation of observed phenomena. The former achieve a system of explanations that is logically homogeneous and free of conflicts; the latter thrive on conflicts and work with the dialectic of alternatives.

Nature, of course, keeps her calm with respect to this controversy among humans. She just keeps doing her things, in her own fashion. We, as humans, are left with no choice but to align our understanding, and our efforts at understanding what Mother Nature does and by which rules, with the facts observed. The facts show that Nature ignores some of our definitions and appears to work according to rules that do not obey them. Not understanding some of Nature’s processes leads us to cognitive dissonance and to some resistance against following avenues of thought that lead outside the system of what has traditionally been termed ‘rational thinking’, insofar as rational thinking means following age-honoured systems of definitions.

As a psychologist, one cannot – and will not want to – avoid thinking in a fashion that accepts the existence of logical conflicts and contradictions. One explanation of why classical Greek tragedies are of immortal cultural value is that in them each participant acts according to well-reasoned and impeccable logic: no one is at fault – or “logically wrong” – in pursuing his goals; the situation makes the conflict unavoidable.

Not shying away from dealing with logical conflicts opens up wider perspectives on Nature’s machinations. Once one is ready to disregard traditional definitions, one is free to construct explanatory models that describe Nature in a simple, easy and comprehensive fashion and deliver the penny that drops as one cries “Aha!”.

Specifically, if one disregards the definition of commutativity and observes that (a,b)→c is indeed different from (b,a)→c, and one also overcomes the dichotomy (commutative) ↔ (sequential), one has passed important milestones in understanding the nuts and bolts of how the combinatorics of the interplay between the sequenced DNA and the commutative organism actually functions.

My lecture invites the audience to work with new ideas. The difficulty lies in accepting cognitive dissonance regarding the eternal truth of definitions, and in understanding that definitions are man-made and may well turn out to have been time-honoured and sensually seductive but nevertheless false. The shock the audience will experience is comparable to the shocks our forefathers experienced when confronted with the ideas that the Earth is not flat and, later, that it circles the Sun: both of which overturned what had been fundamental definitions of truth.

The new idea is presented using irrefutable arguments that show some properties of cyclic permutations. We use properties of the natural numbers to show that ordering and reordering logical tokens – actually, instances of a+b=c – allows us to recognise patterns that show what is where, and when. The numbers are beyond any discussion of right or wrong; they are just numbers. What we can read out of the numbers can open the eyes and give rise to discussions about whether, and how, the concept is practically applicable. It certainly seems useful as a tool for explaining how the transfer of genetic information is engineered by Nature. This lecture is an exercise in forensic accounting and will unveil some of the cute little accounting tricks Nature uses while managing the business of genetics (and of memory, pattern recognition and learning, to point out the most obvious applications of these principles of information management).
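The combinatorial kernel of the argument – that the ordered pair (a,b)→c is a different token from (b,a)→c, and that reordering the same tokens yields recognisable patterns – can be sketched in a few lines. The functions below are hypothetical illustrations chosen for this abstract, not the lecture’s actual construction:

```python
def additions_for(c):
    """All ordered pairs (a, b) of natural numbers with a + b = c."""
    return [(a, c - a) for a in range(1, c)]

def cyclic_shifts(seq):
    """All cyclic permutations (rotations) of a sequence."""
    return [tuple(seq[i:] + seq[:i]) for i in range(len(seq))]

# Ordered pairs are distinct tokens: (a, b) -> c differs from (b, a) -> c,
# even though both "account for" the same sum c.
pairs = additions_for(5)
assert (1, 4) in pairs and (4, 1) in pairs and (1, 4) != (4, 1)

# Reordering the same tokens produces recognisable patterns: every cyclic
# shift contains the same elements, but what sits where (and when) changes.
for shift in cyclic_shifts([1, 2, 3]):
    print(shift)   # (1, 2, 3), then (2, 3, 1), then (3, 1, 2)
```

The numbers themselves are, as the abstract puts it, just numbers; the point of the sketch is only that position within an ordering carries information of its own.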

  • Open access
Information as a complex notion with Physical and Semantic information substituting for Real and Imaginary constituents

Shannon’s Information Theory was devised to improve the performance of communication systems and to assure efficient and reliable message exchange over a communication channel. In this context, the question “what is information” per se was never asked and was irrelevant to the engineering problems under consideration. The newly invented notion of an “information measure” served the design tasks quite well. This led to a long-lasting, improper mixing and merging of the notions of “information” and “information measure”, which, in turn, left the relations between the notion of “information” and the notions of “data”, “knowledge”, and “semantics” blurred, intuitive and undefined.

 

However, recent advances in almost all scientific fields put an urgent demand on an explicit definition of what information is – especially meaningful information, which today dominates contemporary life-science research. To meet this demand, I have proposed a new definition of information, which in its latest version reads as follows:

“Information is a linguistic description of structures observable in a given data set”.

 

Here, I would like to provide some auxiliary arguments justifying this definition. Shannon’s Information Theory was devised for communication systems, where the transmitted message is always shaped as a linear, one-dimensional string of signal data. Even a TV image was once transmitted in a line-by-line scan fashion. However, the human brain perceives an image as a single two-dimensional entity. Providing an information measure for a two-dimensional signal is a problem not foreseen by the Information Theory. Therefore, I have deliberately chosen a digital image to explore my “what information is” definition.

 

A digital image is a two-dimensional set of data elements called picture elements, or pixels. In an image, pixels are not placed randomly; due to similarities in their physical properties, they are naturally grouped into lumps or clusters. I propose to call these clusters primary, or physical, data structures.
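As an illustrative sketch (not an algorithm proposed in the text), the formation of primary data structures can be mimicked by grouping pixels of equal value into connected clusters. The toy image, the equal-value criterion and the 4-connectivity rule below are all simplifying assumptions:

```python
from collections import deque

# A toy 4x4 "image" of pixel values, with similar values lying adjacent.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [5, 5, 5, 9],
    [5, 5, 5, 9],
]

def primary_structures(img):
    """Group pixels into clusters of equal value that touch each other
    (4-connectivity). Each cluster plays the role of one 'primary data
    structure' in the sense of the abstract."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    clusters = []
    for y in range(h):
        for x in range(w):
            if seen[y][x]:
                continue
            value, cluster = img[y][x], []
            queue = deque([(y, x)])
            seen[y][x] = True
            while queue:                      # breadth-first flood fill
                cy, cx = queue.popleft()
                cluster.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and img[ny][nx] == value):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            clusters.append((value, cluster))
    return clusters

for value, cluster in primary_structures(image):
    print(f"value {value}: {len(cluster)} pixels")
```

Running this on the toy image yields three clusters (of 4, 6 and 6 pixels); the secondary, semantic grouping of such clusters discussed below is, by contrast, observer-dependent and not something this kind of purely physical criterion can capture.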

 

In the eyes of an external observer, the primary data structures are further grouped into larger and more complex agglomerations, which I propose to call secondary data structures (structures of structures). These secondary structures reflect the human observer’s view of the grouping of primary data structures, and therefore they could be called meaningful, or semantic, data structures. While the formation of primary (physical) data structures is based on objective (natural, physical) properties of the data, the subsequent formation of secondary (semantic) data structures is a subjective process guided by human conventions and habits.

 

As said above, a description of structures observable in a data set should be called “information”. In this regard, two types of information must be distinguished – Physical Information and Semantic Information. Both are language-based descriptions; however, physical information can be described with a variety of languages (recall that mathematics is also a language), while semantic information can be described only by means of natural human language. (More details on the subject can be found in [1].)

 

The segregation between physical and semantic information is the most essential insight about the nature of information provided by the new definition. Indeed, most present-day followers of Shannon’s Information Theory speak predominantly about Integrated Information Theory, Generalized Information Theory, United, Unified, Integral, Consolidated and other such “Informations”, cherishing the idea that semantic information can be seen as an extension of Shannon’s information and can in some way be merged with it. Shannon himself always distanced himself from such an approach and warned (in 1956): “In short, information theory is currently partaking of a somewhat heady draught of general popularity. It will be all too easy for our somewhat artificial prosperity to collapse overnight when it is realized that the use of a few exciting words like information, entropy, redundancy, do not solve all our problems” [2].

 

Although my definition of information as a complex notion composed of Real and Imaginary parts (in our case, Physical and Semantic information) undoubtedly highlights this duality of information, the mainstream information-processing community persistently tries to treat the two jointly.

 

From the point of view of my definition, all of today’s known “informations” – Shannon’s, Fisher’s, Rényi’s, Kolmogorov’s, and Chaitin’s – should be seen as incarnations of physical information. Categorically, semantic information cannot be derived or drawn from physical information. Despite this, people persistently try to do so again and again.

 

Only from the point of view of my definition can the ambiguous relations between data and information, knowledge and information, and cognition and information be clarified and made distinct. Floridi’s question “is information meaningful data?” [3] now has to be answered decidedly: No! Information has nothing to do with data! Semantic information (semantic interpretation) is ascribed to physical information, not to the data that carries it. The relation between knowledge and information can also now be expressed more correctly: knowledge is semantic information memorized in the system. Cognition (intelligence, thinking) also becomes undeniably explicated: cognition is the ability to process information [4].

 

Only from the point of view of my definition, which declares and affirms the duality of information, can one understand and explain the paradigm shift we witness today in all fields of science: from a Computational approach (based on physical-information processing, that is, data processing) to a Cognitive approach (based on semantic-information processing). No one can deny this ubiquitously discerned paradigm shift: from Computer vision to Cognitive vision, from Computational linguistics to Cognitive linguistics, from Computational biology to Cognitive biology, from Computational neuroscience to Cognitive neuroscience, and so on – the list can be extended endlessly.

 

Only from the point of view of my definition are information descriptions reified as text strings written in some language with a case-appropriate alphabet. That is, information must now be seen as a material entity – not a spiritual or psychic impression, but a solid physical substance (information as a thing, once a much-debated topic). This requires an urgent revision of many well-established notions and information-processing practices (in brain, neuro, bio, and many other life sciences).

 

I hope my humble opinion will prove helpful when the time comes to face these issues.

 

 

References:

 

[1] Diamant, E., “Brain, Vision, Robotics and Artificial Intelligence”.

http://www.vidia-mant.info

 

[2] Shannon, C., “The Bandwagon”, (1956).

http://www.monoskop.org/images/2/2f/Shannon_Claude_E_1956_The_Bandwagon.pdf

 

[3] Floridi, L., “Is Semantic Information Meaningful Data?”, Philosophy and Phenomenological Research, Vol. LXX, No. 2, March 2005.

http://www.philosophyofinformation.net/wp-content/uploads/sites/67/2014/05/iimd.pdf

 

[4] Diamant, E., “The brain is processing information, not data: Does anybody knows about that?”.

https://www.researchgate.net/publication/291352419

 

 

 

  • Open access
INFORMATION PHILOSOPHY, DOCUMENT AND DNA: The “document man” and the biobanks

1 Introduction: information, document and gene

Since the nineteenth century, concern with the organization of worldwide knowledge records, mainly focused on bibliographic studies, has clearly recognized that order presupposes control, and control presupposes a digitisation of facts, things and subjects. The search for universal bibliographic control, which found its maximum expression in the Belgian Paul Otlet, has always been entangled with a world project of elevating spirits at the political and epistemological level, as well as with a panoptic state project of concentration, vigilance and security. The unfolding of nineteenth-century ideas at the level of an information philosophy reveals the most urgent discussions of contemporary dilemmas: the production, ordering, and surveillance of genetic records, recognized as "human documents", sources of information not only about the subject in its singularity but about the plurality of the notion of "humanity."

The current of information philosophy identified in part as “neodocumentalist” affirms the "documentary" condition of informational argumentation. It is a relation based on the debate between the "material" and the "immaterial". This is one of the central positions of Frohmann (2011, 2009, 2004) and of other theorists influential in the current. Among the influences on this thought, we objectively find the ideas of Foucault (2010, 2002, 1970) and of Wittgenstein (1992a, 1992b, 1979), in a clear demarcation of the philosophy of ordinary language.

Dialogues with the philosophy of language place the "neodocumentalist" perspective in direct approximation with the Heideggerian approach. Here we also find the relationship between being and language present in Rafael Capurro's information philosophy. Capurro's approach gives language a central force as the constituent element of human relations, especially when we observe the real from the standpoint of information.

The unfolding of these neodocumentalist and Capurrian approaches leads us to one of the most flagrant empirical determinations of the debate on philosophy, ethics and society in the contemporary world: the condition of DNA and biobanks. Admittedly a moral dilemma for the twenty-first century, it puts into focus some central elements that underlie a gene ethics grounded in the philosophy of information, and definitively establishes man as document.

2 Document and language: preambles to DNA as document

In Lund's view (2009), the modern question of the document as a record of bureaucratic movements, of the modern state and its institutions, adds two other aspects to its meaning: its condition of proof (the object that holds the truth of declarations), which gave the notion of authenticity great relevance; and its "informational" condition, that of renseignement, an object that provides information, which in a way recovers the earlier educational conception of the document.

The importance of the document to the modern world is reflected in the relationship between society and science. In the nineteenth century, the word "documentation" gained great prestige among scientists and the various branches of management. From then on, the quality of scientific work and the efficiency of the market depended on adequate and accurate documentation. In Lund's view (2009), the mere combination of logical arguments was no longer enough: the scientist had to prove empirically, to "demonstrate documentarily", the process and results of his research. This is the setting for the birth of the first theory of the document, born with Paul Otlet and treated by Lund (2009) as a professional document theory.

Bernd Frohmann (2004, 2011) places himself in the field of Documentation and gives ever more importance to the study of the document and, more than this, to the materiality of the instruments of knowledge.

"Documentation recognizes as urgent an imperative to study ancient, medieval, or early modern documentary practices as those that feature electronic documents. What we do with electronic documents, how such practices are configured, and what they do to us are eminently worthy of study. But the digital form of contemporary documents creates no special philosophical imperatives, since the concept of documentary practices was there all along." (Frohmann, 2004, p. 406)

Frohmann (2011, p. 59) criticizes the naive vision that approaches the document as a mere conveyor of information. The researcher developed the concept of "documenting", which refers to the capacity and power of the document, in its arrangements with other elements of networks, or assemblages, "to generate marks, signs, or traces". According to the researcher, his focus on the materiality of documents is inspired by documentary movements from the turn of the nineteenth to the twentieth century, especially the works of Paul Otlet and Suzanne Briet, who, in his view, insisted on the focus on material objects as documents, citing the famous example of Briet's antelope (Frohmann, 2011, p. 57).

From his so-called angeletics, Rafael Capurro (1988) seeks a science of messages and messengers, both within the framework of the message-building phenomenon and in the context of the action/sharing of the message (Smith, 2000). His interest, according to Smith (2000), is to find a unified means of understanding information and its role at the heart of human life and global society. It is the attempt at a unified definition to clarify the rationale of the concept of information.

In other words, Capurro (1988, 2008) proposes an information theory sustained by a theory of the message. It relates to a view that sees the information society as a "message society" that evolves technologically and culturally. Information is taken as a message that makes a difference, either as a form or as a kind of offering of meaning. In the Capurrian vision, this theory refers both to the Greco-Latin notion of information and to the communication perspective.

3 The DNA as a document

The term biobank came about in the late twentieth century. The earliest identified use of the term is from 1985 (GODEC, 186); however, only in the second half of the 1990s did biobanks begin to be developed in the way they are today.

This movement began in Iceland in 1996, when the US company deCode created a biobank in the country with strong commercial interests. These intentions led to various protests by the Icelandic population, and strict legislation was created (Árasonas, 2007, p. 2). The experience of the introduction of the Icelandic biobank has served as a basis for lawyers, businessmen, researchers and governments around the world.

The emergence of biobanks became constant after the implementation of the Icelandic biobank; however, the growth of these spaces intensified after the release of the complete results of the Human Genome Project in 2003. That project intensified genetic research and made possible the existence of large-scale DNA sample repositories.

One of the earliest definitions of biobanks was formulated in 2000 in Iceland and considers biobanks as "a collection of biological samples that are permanently preserved" (ICELANDIC BIOBANKS ACT, 2000 apud RIAL-SEBBAG; CAMBON-THOMSEN, 2010). However, biobanks do not store just any kind of biological sample: these institutions are known to store exclusively human biological samples, as the Norwegian health institute points out: "A biobank is a collection of human biological material" (NORWEGIAN INSTITUTE OF HEALTH, 2012).

The term human biobanks usually refers to collections of samples of human body substances (e.g. tissue, blood, DNA) which are linked to personal data and socio-demographic information about the donors of the material. They have a dual nature as collections of samples and data (ETHIKRAT, 2004).

By this definition it is possible to identify which types of biological samples are stored; in addition, it is evident that biobanks are not mere repositories of human material but rather spaces for the study and production of scientific information, besides being, of course, repositories of information about the individuals from whom the samples were taken.

Because they are large information repositories that foster research, biobanks are already considered indispensable sites for the production of knowledge. For the Deutsches Biobanken-Register, "Biobanks are a key prerequisite for modern medical research. By linking samples and clinical data they make it possible to clarify the causes and the course of diseases." (DEUTSCHES BIOBANKEN-REGISTER, 2017).

Biobanks are basically distinguished by the number of samples stored and by the criteria for sample acquisition. One of the largest biobanks in the world is the UK Biobank, which has collected DNA samples from about 500,000 British citizens. Such biobanks, like the Estonian Biobank and the Qatar Biobank, are known as population-based (Kelley et al., 2007), since they store samples from citizens of a specific country in large quantities. But there are also biobanks with very specific acquisition criteria, such as the Chernobyl Tissue Bank, which stores samples of people who were exposed to radiation during childhood (CHERNOBYL TISSUE BANK, 2017).

In this way, it is possible to perceive that the samples are the central element in biobanks, since they are the physical evidence of information and of DNA, constituting the main source of biobanks' research resources.

Briet is, in the canonical tradition of Information Science, the first to present the notion of "living documents". For her, far beyond form, the document may be something that was not created by man and does not consist of a medium into which information was entered. Following the criteria established by Otlet, the document is something that transmits information and works as evidence, and it is also something that can generate other documents.

Jean Meyriat, as pointed out by Ortega and Lara (2010), is one of the disciples of Otlet's and Briet's work on the document concept. Meyriat thus developed a kind of complement to the works of Otlet and Briet, dealing with the purpose of understanding the document as object.

Meyriat points out that there are two types of document: one that is clearly a document (document par intention), since it is a product developed by man to perform this function; and an object that has come to be considered a document (document par attribution), even though it was not created for this, having become informative due to some need or circumstance. In this sense, Meyriat approaches the condition pointed out by Otlet for something to be considered a document: the object in question has a function of evidence.

The author states that any object can be considered a document, even if it was not created for this, as long as it constitutes a source of information and a support for a message. However, Meyriat points out that a document is really a document only when used as such; that is, the author does not treat documents in a binary way (Meyriat, 1981).

This assertion is based on the fact that the will of the creator of the document is not sufficient to sustain it as such. If a document by intention is used for a purpose unrelated to the transmission of information, the object will not be a document, because obtaining information from the object in question is always necessary. Thus, even for a document by intention, that is, something created to have the function of a document, the will of its creator is not sufficient to guarantee that it will be used as a document (Meyriat, 2001, pp. 144-145).

Thus, we can understand that Meyriat does not restrict the product of human activity to man-made items, but maintains that any item that undergoes human intervention and is used as a source of information, having the character of evidence, can be considered a document by attribution.

In addition, Meyriat shows that a document by attribution needs an institution to legitimize it in order to become a document. The author also states that documents are generated and legitimized by a techno-social system; that is, documents are the fruits of an era and of the structures within which their creators operate (Meyriat, 2001, pp. 151-152).

If, as Marteleto and Couzinet point out, "it is necessary to rethink the document as a permanent polymorphous object" (Marteleto, Couzinet, 2013, p. 7), it is possible to consider such biological samples as documents insofar as they serve as support and documentary evidence for the generation of other documents composed of written records whose primary function is to be a document. That is, human biological samples have the functions necessary to be characterized as documents, since they present evidence.

4 “Document man”: ethics “for” a library of records of human biovestiges

The current condition of biobanks objectively touches the views of the neodocumentalist perspectives and of Rafael Capurro: in other words, an information philosophy, in the Frohmann and Capurro view, linked to a philosophy of the document and to an intercultural ethics of information.

Frohmann (2000) shows us that a contemporary ethics linked to informational dilemmas depends on the consolidation of a critique of cyberethics, that is, of the relation between cybernetics and morality. In dialogue with Rafael Capurro, Frohmann (2000) identifies a post-cybernetic dialectic between bodies and bytes. The author states that the question of materiality is central to the construction of a true ethical plan of criticism of informational dilemmas.

Algorithms, as parts of the cybernetic plane of the human body, are parts of that same body. We are on a material plane where the subject, not the machine, is the case.

"Whatever is special about information ethics derives from the specificity of the information services provided to specific publics. It is therefore analogous to legal ethics, medical ethics, dental ethics, or the ethics of plumbers. Like these other fields, much of what is unique to it consists in applying ethical principles to the specific services it provides. These applications should, I suggest, be driven by an ethics of acknowledged dependence, and a materialist information theory. Once we abandon the animal world for the spectral terrain of angels, where pure information flows from spirit to spirit, we may gain the satisfaction of inhabiting an ethical zone belonging just to us, but we lose the virtues we need to grapple with serious ethical issues". (Frohmann, 2000, pp. 434-435)

The subject's perspective emerges as a "material" expression of culture. Ethics is, therefore, a movement of relations between bodies in a cultural context, including the web. It is an ethics that conceives the subject-document, the man-document, but always in the condition of the "other", of the otherness.

"The ethics of alterity, opposed to a transcendental ethics, “ethics of the Lord's eyes”, from the Lord's point of view, or, still, the “ethics of the angels” (incapable of conceiving and knowing the presence and the power of the presence of a certain Wall), now becomes an “intercultural ethics of information”, capable not only of recognizing that the Wall is there, but of looking for ways of “knocking it down” - if not physically, in its symbolic structure altogether, presenting new possibilities for multiple worlds that exist in each culture. In this context poiesis presents itself: the maker of discourses, the poet, “expelled” from the city in a platonic transcendental ethics, and brought back into the scene by Rhetoric and by Aristotelian Poetics, has a voice. Homer, the city's poet, “pops up” then in the German library thinking the world through words." (Saldanha, 2016, p. 261)

In the sense of the intercultural ethics of Rafael Capurro's and Frohmann's information philosophy, structured in the philosophy of the document, the dimensions of alterity and culture stand above a centrality of the "human" as the mark of a universal ethics. Against the universalism of a common ethic, and respecting the different materialities, that is, the expressions of life of the subjects in each community, ethics on the genetic plane is, under these philosophies, centered on the condition of the documented subject in its contextuality.

5 Final remarks

The "libraries of people" are therefore houses recognized as spaces of central ethical tension, where the condition of human alterity, not of centrality, must prevail. The biobanks and the condition of the documentary man place us before the limits of barbarism and of a possible humanism. The plurality proposed by Capurro and Frohmann allows us, in our view, to construct the necessary dialogue on the permanent removal of the risks of a barbarism related to the "non-human" uses of "human beings", that is, to prevent Wiener's cybernetics, applied to the development of biobanks, from ceasing to be a possibility for the expansion of life and becoming a weapon for its extinction.

The language of DNA, understood as the ability to know the most distinct sub-elements that define the subject in its biological expression, cannot overlap the cultural subject; that is, the document-man is a historical subject. However, as a document, such a subject is susceptible to uses and reuses according to each socio-historical intentionality.

Biobanks currently represent such a borderline condition: the intense production of studies and records on human beings tends to create multiple repositories of human data. These repositories can constitute safeguard reserves for specific cultural problems or turn into core weapons for political struggles and military uses, allowing biomassacres. The historical and philosophical lessons of interculturality and documentality can serve as fundamental ethical models to ward off the risks of the barbarism represented by the second case.

Acknowledgment

This research was developed with the financial support of the National Council for Scientific and Technological Development (CNPq), Brazil, and the Carlos Chagas Filho Foundation for Research Support of the State of Rio de Janeiro (FAPERJ), Rio de Janeiro, Brazil.

  • Open access
  • 47 Reads
Information, Constraint and Meaning from the pre-biotic world to a possible post human one. An Evolutionary approach

The presentation proposes to complement an existing development on meaning generation for animals, humans and artificial agents by looking at what could have existed in prebiotic times and what a post-human meaning generation could be.
Meanings do not exist by themselves. The core of the approach is an existing model for meaning generation: the Meaning Generator System (MGS). The MGS is part of an agent subject to an internal constraint. The MGS generates meaningful information (a meaning) when it receives information that has a connection with the constraint. The generated meaning is used by the agent to implement an action (external or internal) aimed at satisfying the constraint. The action can be physical, biological or mental.
The purpose of the presentation is to widen the MGS approach in order to reach a coverage of information, constraint and meaning starting at the pre-biotic level and going up to a possible post-human one.
We begin by presenting the MGS for animals, humans and artificial agents with the corresponding constraints (https://philpapers.org/rec/MENCOI). We then look at what could have been a local constraint at a pre-biotic, far-from-thermodynamic-equilibrium level (https://philpapers.org/rec/MENMGF-2) and propose a possible post-human status through an evolution of anxiety management processes (https://philpapers.org/rec/MENPFA-3, https://philpapers.org/rec/MENCOO).
Such an approach makes available links between information science and physics, evolution, anthropology, semiotics and the human mind.
Continuations are proposed.

  • Open access
  • 29 Reads
Novel Approach: Information Quantity for Calculating Uncertainty of Mathematical Model


Extended abstract

B.M. Menin

Almost all readers remember the idea of our distant ancestors according to which the Earth is surrounded by a glass dome with stars and planets fixed to it. Democritus tried to explain to the masses the simple truth that the Earth is just a small particle in a vast universe, but Aristotle's picture was closer to those in power, so it lasted for thousands of years.

Everything comes to an end, and modern science, barely 500 years old, has made a real revolution in human consciousness. Today, without genetics, the Big Bang theory, information theory, quantum electrodynamics and the theory of relativity, it is difficult to imagine the realization of flights into space, genetic engineering, nuclear power plants and, at least in theory, an era of relative abundance.

This became possible due to the modeling methods widely used in recent decades, based on accounting for a huge number of variables, the use of powerful computers and modern mathematical methods. That is why the prevailing view in the scientific community is that the more precise the instruments used for model development, the more accurate the results. For example, 3,000 parameters are used in the EnergyPlus program developed by the US Department of Energy. However, energy simulation results can easily range from 50% to 200% of the actual building energy use.

What can be done to overcome this apparent contradiction? Here the theory of similarity comes to the rescue. Applying the theory of similarity is motivated by the desire to generalize the obtained results to different areas of physical application. In studying the phenomena occurring in the world around us, it is advisable to consider not individual variables but their combinations, or complexes, which have a definite physical meaning. Methods of the theory of similarity, based on the analysis of integro-differential equations and boundary conditions, allow the identification of these complexes. In addition, the transition from dimensional physical quantities to dimensionless variables reduces the number of variables taken into account. But this alone is not enough to simplify the calculations.
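The reduction promised by similarity theory can be counted with the Buckingham π theorem (a standard result not spelled out in the abstract): the number of independent dimensionless complexes equals the number of dimensional variables minus the rank of their dimensional matrix. A minimal sketch, assuming a hypothetical pipe-flow example with the variables pressure drop, density, velocity, diameter and viscosity:

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank of a matrix via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    cols = len(m[0]) if m else 0
    for col in range(cols):
        pivot = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Hypothetical example: exponents of the primary dimensions (M, L, T)
# for each variable of a pipe-flow problem.
variables = {
    "pressure_drop": (1, -1, -2),   # M L^-1 T^-2
    "density":       (1, -3,  0),   # M L^-3
    "velocity":      (0,  1, -1),   # L T^-1
    "diameter":      (0,  1,  0),   # L
    "viscosity":     (1, -1, -1),   # M L^-1 T^-1
}

n = len(variables)
r = matrix_rank(list(variables.values()))
# 5 variables, rank 3 -> 2 dimensionless complexes (here, Euler and Reynolds numbers)
print(f"{n} dimensional variables -> {n - r} dimensionless complexes")
```

The two surviving complexes in this classical example are the Euler and Reynolds numbers; the point is that the model now depends on 2 variables instead of 5.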

Human intuition and experience suggest a truth that seems simple at first glance. With a small number of variables, the researcher obtains only a rough picture of the process being studied. In turn, a huge number of accounted variables can allow a deep and thorough understanding of the structure of the phenomenon. However, despite this apparent attractiveness, each variable brings its own uncertainty into the integrated (theoretical or experimental) uncertainty of the model or experiment. In addition, the complexity and cost of computer simulations and field tests increase enormously. Therefore, some optimal or rational number of variables, specific to each of the studied processes, must be considered in order to evaluate the physical-mathematical model.

Here the theory of information comes to the aid of scientists, because simulation is an information process in which the developed model receives information about the state and behavior of the observed object. This information is the main subject of interest in the theory of modeling.

The model is a framework of ideas and concepts through which a researcher/observer interprets his intuition, experience, observations and experimental results. It includes a physical structure-model and a mathematical structure-model. The physical model is expressed as a set of natural laws inherent to the recognized object; it interprets the mathematical model, including its assumptions and constraints. The mathematical model is a set of equations using symbolic representations of quantitative variables in a simplified physical system; it helps the modeler to understand and quantify the physical model, thus enabling the physical-mathematical model to make precise predictions and support different applications.

The process of formulating a physical-mathematical model can be called information processing: the initial information and/or representations of the object under study do not change, but new information is created. Physicists and engineers receive information about the observed process and can develop scientific laws and analyze natural phenomena or technical processes only on the basis of this information.

In other words, the observer knows about a particular phenomenon only if this object has a name in the observer's mind, and in his mind there are data that represent the properties of the object. It should be emphasized that no observer or group of scientists is ideal, because otherwise they would be able to acquire potentially endless knowledge.

Thus, scientists came to the brilliant idea of quantifying the uncertainty of a conceptual model based on the amount of information embedded in the model and conditioned only by the selection of the limited number of variables that must be taken into account. This idea is based on thermodynamic theory and on concepts of Mark Burgin's general theory of information. It rests on two axioms.

The first is that general knowledge of the world is significantly limited by the act of choosing a System of Primary Variables (SPV). Whatever people know, all scientific knowledge depends only on information framed by the SPV. As examples of SPVs, SI (the International System of Units) or CGS (centimeter-gram-second) may be offered. The number of dimensional variables included in an SPV is finite. An SPV is a set of variables (primary and, designed on their basis, secondary) that are necessary and sufficient to describe the known laws of nature, both in physical content and quantitatively.

The second is that the number of variables considered in the physical-mathematical model is limited. The limits of the description of the process are determined by the choice of the class of phenomena (CoP) and the number of secondary parameters considered in the mathematical model. A CoP is a collection of physical phenomena and processes described by a finite number of primary and secondary variables that characterize certain features of the observed phenomenon from the qualitative and quantitative aspects.

For example, for the combined processes of heat exchange and electromagnetism, it is useful to use the primary SI dimensions: length L, mass M, time T, temperature Θ, and electric current I, i.e., CoP_SI ≡ LMTΘI. In thermodynamics, the basic set of primary dimensions often includes L, M, T and the thermodynamic temperature Θ, that is, CoP_SI ≡ LMTΘ. If the SPV and CoP are not specified, then the definition of "information about the phenomenon being investigated" loses its validity, and the information quantity can increase to infinity or decrease to zero. Without an SPV, the simulation of the process is impossible. As the famous French physicist Brillouin noted, "You cannot get anything out of nothing, even observation." The SPV can be interpreted as the basis of all available knowledge that people can have about the surrounding nature at the moment.

To this we should add that the researcher chooses variables for the model describing the observed object based on his experience, knowledge and intuition. These variables can differ fundamentally in nature, qualitatively and quantitatively, from another group of variables selected by another group of scientists. This happened, for example, in studying the motion of the electron as a particle or as a wave. Therefore, it is possible to treat the choice of a variable as a random process in which accounting for any particular variable is equally probable. This approach completely ignores the human evaluation of information. In other words, a set of 100 notes played by a chimpanzee and the 100 notes of a melody from the Andante of Mozart's Piano Concerto No. 21 carry exactly the same amount of information.
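The equal-probability point can be made concrete with Shannon's measure: if each note is drawn independently and uniformly from the same alphabet, every sequence of the same length carries the same self-information, regardless of who produced it. A minimal sketch, assuming an alphabet of 88 piano keys (an illustrative choice, not from the abstract):

```python
import math

def self_information_bits(sequence_length, alphabet_size):
    """Self-information of one equiprobable sequence: log2(alphabet^length) bits."""
    return sequence_length * math.log2(alphabet_size)

# Both "melodies" are 100 notes drawn from the same 88-key alphabet.
mozart = self_information_bits(100, 88)
chimpanzee = self_information_bits(100, 88)

# The measure ignores meaning and aesthetic value entirely.
assert mozart == chimpanzee
print(f"each 100-note sequence carries {mozart:.1f} bits")
```

This is exactly the limitation the text notes: the quantity of information is blind to the human evaluation of its content.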

Perhaps surprisingly, based on the above-mentioned assumptions, one can obtain a mathematically very simple formula for calculating the uncertainty of a mathematical model describing the observed phenomenon. This uncertainty determines a limit on the advisability of further increasing measurement accuracy in pilot studies or computer simulations. It is not a purely mathematical abstraction; it has a physical meaning. This relationship testifies that in nature there is a fundamental limit to the accuracy of measuring any observed material object, which cannot be surpassed by any improvement of instruments and methods of measurement. The value of this limit is much larger than the Heisenberg uncertainty relation provides and places severe restrictions on micro-physics.

The proposed method was used to analyze the results of measurements of the Boltzmann constant and the gravitational constant published in the scientific literature during 2005-2016. The results are in excellent agreement with the recommendations of CODATA (the Committee on Data for Science and Technology).

  • Open access
  • 89 Reads
Information Taxonomy

There is a diversity of different types and kinds of information. To organize this huge collection into a system, it is necessary to classify information with respect to various criteria, developing a multiscale information taxonomy in which each dimension is an aspect information taxonomy. We construct such a multiscale information taxonomy based on the general theory of information (Burgin, 2003; 2004; 2010), making use of its principles and technical tools.

It is important to understand that taxonomies are not auxiliary edifices in science: when scientifically grounded and validated, they are also laws of science. For instance, the biological taxonomy of Carolus Linnaeus is a law of biology in the same way as Newton's laws are laws of physics.

Here we follow the taxonomic traditions of Carolus Linnaeus and Charles Sanders Peirce in the direction of information science. On the one hand, the results of our research connect new information science and technology with classical science, demonstrating intrinsic links between information theory and the profound results of Linnaeus. On the other hand, these results show the unity of achievements of scientists working in different countries and on different continents, such as the biological classification of Linnaeus, the chemical classification of Mendeleev, the semiotic classifications of Peirce, the classifications of subatomic particles in contemporary physics and the classifications in information science developed here. We begin with a brief exposition of the methodological principles of taxonomy construction and then apply these principles to the development of basic information taxonomies. Due to space restrictions, we describe only some of them here.

  1. Principles of taxonomy construction

Given a multiplicity of objects, it is necessary to induce organization, because organization helps to study, understand and utilize this multiplicity. Organization is achieved by the structuration of the multiplicity. An efficient technique of structuration is the construction of taxonomies, classifications, typologies and categorizations. Let us consider the process and basic principles of taxonomy construction.

Taking a multiplicity of objects M, a researcher explicates the objects' properties, molding aspects, or amalgamated features, of M. Then the researcher elucidates a criterion for each aspect. This allows forming a scale for measuring/evaluating each aspect. Such a scale, together with the corresponding criterion, allows the researcher to build an aspect taxonomy. Combining all aspect taxonomies, the researcher obtains a multiscale taxonomy of the multiplicity M.
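The construction steps above (aspects, criteria, scales, aspect taxonomies, their combination) can be sketched as a data structure. The class names and the sample aspects below are illustrative, not from the paper; the two aspects are borrowed from the taxonomies discussed later in the text:

```python
from dataclasses import dataclass, field

@dataclass
class AspectTaxonomy:
    """One aspect of the multiplicity M, with its criterion and a scale of classes."""
    aspect: str
    criterion: str
    scale: list  # the classes along this aspect

@dataclass
class MultiscaleTaxonomy:
    """Combination of all aspect taxonomies of the multiplicity M."""
    aspects: list = field(default_factory=list)

    def classify(self, assignments: dict) -> dict:
        """Place an object by choosing one class per aspect."""
        placement = {}
        for a in self.aspects:
            cls = assignments[a.aspect]
            if cls not in a.scale:
                raise ValueError(f"{cls!r} is not on the scale of {a.aspect!r}")
            placement[a.aspect] = cls
        return placement

# Illustrative aspect taxonomies echoing the existential and developmental
# taxonomies described in the following sections.
existential = AspectTaxonomy("existential", "world of existence",
                             ["physical", "mental", "structural"])
developmental = AspectTaxonomy("developmental", "temporal status",
                               ["potential", "actualized", "emerging"])
taxonomy = MultiscaleTaxonomy([existential, developmental])

print(taxonomy.classify({"existential": "structural",
                         "developmental": "potential"}))
```

Each aspect taxonomy is one "scale" of the multiscale taxonomy; classifying an object means choosing a position on every scale at once.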

It is important to understand that, according to the contemporary methodology of science, there are three types of scientific laws: classificational, equational and implicational. Scientists traditionally consider only the latter two types as laws of nature, although the first type also reflects important regularities in nature and society.

An equational law has the form of an equation, for example, E = mc² or a differential equation, as do many laws in physics, chemistry and economics.

An implicational law has the form of an implication "If A, then B". For instance, if ∆ABC is a right triangle, then its sides satisfy the equation c² = a² + b². This is a mathematical law called the Pythagorean theorem.

A classificational law has the form of a classification, typology or taxonomy. The biological taxonomy of the great biologist Carolus Linnaeus (1707-1778) and the triadic typologies of the great logician Charles Sanders Peirce (1839-1914) are examples of classificational laws.

In addition, scientific laws can be qualitative and quantitative.

A quantitative law describes relations between quantitative characteristics of definite phenomena. For instance, Newton’s law of motion ma = F is a quantitative law of physics.

A qualitative law describes relations between qualitative characteristics of definite phenomena. For instance, the Galilean law of inertia, "Every body continues its state of rest or of uniform motion in a straight line unless it is compelled to change that state by forces impressed upon it", is a qualitative law of physics. Classificational laws are usually also qualitative laws.

These methodological findings determine the higher scientific status and importance of Linnaeus' groundbreaking classification, as well as of the taxonomies constructed in this paper. Namely, this new understanding of scientific laws shows that these taxonomies are qualitative laws of information science.

Note that while equational and implicational laws have been acknowledged in science from its very origin, classificational laws acquired their nomological status only recently, in the structure-nominative direction of the methodology of science.

  2. Three basic taxonomies of information

We begin with the uppermost level of the taxonomic arrangement, which includes a huge diversity of types, kinds, sorts, categories and classes of information. On this level, we build the existential taxonomy.

As information is an omnipresent phenomenon (Burgin and Dodig-Crnkovic, 2011), it is crucial to start its classification at the global level of the whole world. This thesis implies the conjecture that the structure of the world affects the existence of forms of information corresponding to this structure. The large-scale structure of the world is represented by the Existential Triad of the World (Burgin, 2012):

  • Physical World
  • Mental World
  • World of Structures

In the Existential Triad, the Physical (material) World is conceived as the physical reality studied by natural sciences, the Mental World encompasses different levels of mentality, and the World of Structures consists of various forms and types of structures.

The existential stratification of the World continues the tradition of Plato with his World of Ideas (Plato, 1961) and the tradition of Charles Sanders Peirce with his extensive triadic classifications (Peirce, 1931-1935).

This stratification brings us to the phenomenon studied by the general theory of information and called information in the broad sense (Burgin, 2010). According to this approach, information in the broad sense is represented in each of the three worlds. In the Physical (material) World, it is called energy, supporting in this way von Weizsäcker's conjecture that energy might in the end turn out to be information (Weizsäcker, 1974). Situated at the first level of the Mental World, individual mental energy includes psychic energy, studied by such psychologists as Ernst Wilhelm von Brücke (1819-1892), Sigmund Freud (1856-1939) and Carl Gustav Jung (1875-1961). Information in the broad sense that is situated in the World of Structures is called information in the strict sense.

As a result, we have three types of information in the global existential taxonomy:

  • Physical-world information or energy
  • Mental-world information and its particular case, mental energy
  • Structural-world information or information per se defined as information in a strict sense

We will not analyze here the first two kinds of information in the broad sense, as the first of them belongs to the scope of physics, while the second is in the domain of psychology. Our concern is information in the strict sense, or simply information.

The developmental taxonomy is brought on by the temporal aspect of information:

  • Potential information
  • Actualized information
  • Emerging information

Let us consider some examples.

Example 1. Information in a book before somebody reads it is potential.

It is possible to measure potential information by its potential to make changes in the corresponding infological system. For instance, measuring potential epistemic information, we estimate (measure) potential changes in the knowledge system (Burgin, 2011).

Example 2. Information that already gave knowledge about something, e.g., information about observation of a positron obtained by Carl Anderson in 1932, is actualized.

It is possible to measure actual information by changes it made in the corresponding infological system. For instance, measuring actualized epistemic information, we determine (measure) changes in the knowledge system made by reception of this information (Burgin, 2011).

Example 3. Information in a computer that processes this information, or in the head of a person who thinks about it, is emerging.

It is possible to estimate emerging information by its potential to make changes, by the transformations it has made in the corresponding infological system and by the rate of ongoing transformations. For instance, measuring emerging epistemic information, we estimate (measure) what changes in the knowledge system have already been made and reckon the rate of ongoing changes.
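The measurement ideas in the three examples above can be caricatured with sets of knowledge items. The set-difference measure below is an illustrative stand-in only, not Burgin's actual formalism of stratified M-spaces; the knowledge items are invented for the sketch:

```python
def epistemic_change(knowledge_before: set, knowledge_after: set) -> int:
    """Illustrative measure: number of knowledge items added or removed."""
    return len(knowledge_before ^ knowledge_after)

before = {"electron exists", "proton exists"}
after = before | {"positron observed (Anderson, 1932)"}

# Actualized information: the change that reception actually made
# in the knowledge system.
actualized = epistemic_change(before, after)

# Potential information: the change a not-yet-read message could make;
# items already known contribute nothing.
message = {"positron observed (Anderson, 1932)", "proton exists"}
potential = len(message - before)

print(f"actualized: {actualized} item(s), potential: {potential} item(s)")
```

Emerging information would, on this toy picture, add a rate term: the number of items currently changing per unit time.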

Based on the extended triune model of the brain developed in (Burgin, 2010), we have the following bifocal formation/action taxonomy/typology, in which the first facet reflects the form of information existence while the second facet mirrors the action category of information existence:

  • Epistemic (form) or cognitive (action) information
  • Instructional (form) or effective (action) information
  • Emotional (form) or affective (action) information

In this taxonomy, the first term/name of each class represents the form of the corresponding infological system, while the second term/name represents the action of information. That is, the first, nominal attribute of the taxonomic classes characterizes the formation aspects of information, while the second, operational attribute characterizes the procedural aspects of information.

  3. Taxonomies suggested by other authors

Banathy (1995) considers three important types of information. With respect to a system R, it is possible to consider referential, non-referential, and state-referential information.

  1. Referential information has meaning in the system R.
  2. Non-referential information has meaning outside the system R, e.g., information that reflects mere observation of R.
  3. State-referential information reflects an external model of the system R, e.g., information that represents R as a state transition system.

Braman (1989) classifies roles of information:

1) information as a resource, coming in pieces unrelated to knowledge or information flows into which it might be organized;

2) information as a commodity is obtained using information production chains, which create and add economic value to information;

3) information as a perception of patterns has past and future, is affected by motive and other environmental factors, and itself has its effects;

4) information as a constitutive force in society, essentially affecting its environment.

All constructed taxonomies together form a hierarchical multiscale information taxonomy, which gives a systematic picture of information.

References

Banathy, B.A. (1995) The 21st century Janus: The three faces of information, Systems Research, v. 12, No. 4, pp. 319-320

Braman, S. (1989) Defining information: An approach for policymakers, Telecommunications Policy, v. 13, No. 1, pp. 233-242

Burgin, M. Theory of Information: Fundamentality, Diversity and Unification, World Scientific, New York/London/Singapore, 2010

Burgin, M. (2011) Epistemic Information in Stratified M-Spaces, Information, v. 2, No.2, pp. 697 - 726

Burgin, M. Structural Reality, Nova Science Publishers, New York, 2012

Burgin, M. and Dodig-Crnkovic, G. (2011) Information and Computation – Omnipresent and Pervasive, in Information and Computation, World Scientific, New York/London/Singapore, pp. vii – xxxii          

Linnaeus, C. Systema naturae, sive regna tria naturae systematice proposita per classes, ordines, genera, & species, Johann Wilhelm de Groot for Theodor Haak, Leiden, 1735

Peirce C. S. (1931-1935) Collected papers, v. 1-6, Cambridge University Press, Cambridge, England

von Weizsäcker, C.F. Die Einheit der Natur, Deutscher Taschenbuch Verlag, Munich, Germany, 1974

  • Open access
  • 29 Reads
Structures and Structural Information

 


Mark Burgin

University of California, Los Angeles, 520 Portola Plaza, Los Angeles, CA 90095, USA

Rainer Feistel

Leibniz Institute for Baltic Sea Research, Warnemünde, D-18119, Germany

 

Everything has a structure, and this structure makes things such as they are. This was declared by Aristotle for material things and demonstrated in the general theory of structures developed in (Burgin, 2012; 2016) for the whole generality of existing and possible objects. This is the core reason for the importance of structural information, which provides and/or changes knowledge about structures.

As Brinkley writes, "implicit in the word 'structure' is not only the concept of elementary units or parts, but also the interdependence and relationships of those parts to form a whole and it can thus be argued that modern science has adopted a structural approach to understanding the natural world, in which parts are defined and the interactions among them are explored" (Brinkley, 1991; 1999).

Structural information is the core of structuralism, the heart of structural realism and the basic essence of structural informatics.

The goal of this work is to study structural information based on the general theory of information (Burgin, 2003; 2004; 2010; 2011), the research of Feistel and Ebeling (Feistel and Ebeling, 2011; 2016; Ebeling and Feistel, 2015; Feistel, 2016) and the works of other authors in this area, developing more comprehensive and advanced knowledge about structural information.

It is possible to comprehend structural information in different ways. For instance, Bates (2005) treats structural information as “the pattern of organization of matter and energy,” while Reading (2006) defines it as “the way the various particles, atoms, molecules, and objects in the universe are organized and arranged.”

First, we consider the approach developed in the general theory of information, which discerns information in the broad sense (Ontological Principle O2) and information in the strict sense (Ontological Principle O2a).

Structural information is information in the strict sense, defined as a capacity to change the subsystem of knowledge in an intelligent system.

This definition allows deriving key properties of structural information. Let us consider some of them.

  1. Structural information can be more correct or less correct.

Correctness of structural information about a system depends on correctness of knowledge produced by this information (Burgin, 2016). As we know, some knowledge can be more correct, better representing the essence of the system, while other knowledge is less correct, providing a worse representation of the fundamental nature of the system.

Here are two examples.

Example 1. For a long time, people believed that the Earth was flat, i.e., had the structure of a plane.

Then scientists found that the Earth had the structure of a ball.

Later, scientists assumed that the Earth had the structure of a geoid.

Example 2. For a long time, people believed that in the structure of the Solar System, the Sun rotated around the Earth.

Then scientists found that the Earth rotated around the Sun and that its orbit had the structure of a circle.

Later, scientists established that the Earth rotates around the Sun and that its orbit has the structure of an ellipse.

2. As a rule, structural information about a system is not unique.

Many researchers believe that each (at least, each natural) system has a unique structure. At the same time, according to the general theory of structures (Burgin, 2012), any system has several structures. For instance, the structure of a table on the level of its visible parts, such as legs, is essentially different from the structure of this table on the level of molecules. In essence, material systems, which people can see with their eyes and touch with their hands, have structural information on different levels.

3. Structural information about a system is inherent to this system.

Indeed, as it is stated above, structure makes things such as they are. Naturally, structural information reflects this identity of things although structural information about different systems and objects can be similar.

4. Processes in a system can change structural information about this system.

Indeed, the evolution (development) of a system can produce an essentially new structure when the system changes, even becoming another system. For instance, butterflies have a four-stage life cycle: winged adults lay eggs, which become caterpillars, which later pupate in a chrysalis, while at the end of the metamorphosis the pupal skin splits and a butterfly flies off.

5. Structural information about a system describes this system to a definite extent of precision, i.e., structural information can be more precise or less precise.

For instance, the Copernican model (structure) of the Solar System is more precise than the Ptolemaic model (structure). Another example comes from mathematics, where mathematicians strive to find the decimal structure of the number π with higher and higher precision.

6. For complex systems, it is possible to consider structural information on different levels and various scales.

For instance, it is possible to treat the structure of a human being on the level of visible parts, on the level of its functional systems, on the level of inner organs, on the level of cells, on the level of chemical compounds or on the level of molecules.

7. Structural information about a subsystem of a system is not always a part of the structural information about this system.

For instance, when we consider an organism as a system of its visible parts, the structure of its nervous system is not a part of this structure.

8. The process of conversion of structural information about a system into knowledge about this system is, in essence, structuration of this system.

Note that the general theory of information provides other possibilities for defining structural information. For instance, it can be defined as information that changes the system of beliefs of an intelligent system.

At the same time, Feistel and Ebeling suggest a vision of structural information in which it may no longer be restricted to changing just "knowledge in an intelligent system", and may more generally be defined as the capacity of a physical system, the "carrier of structural information", to cause changes in a second physical system, the "receiver of structural information" (Feistel and Ebeling, 2011; 2016; Ebeling and Feistel, 2015; Feistel, 2016).

Indeed, people get information about different objects in the form of raw data. Only after the reception of this information does the brain convert these data into knowledge, and this knowledge is often about the structure of the studied objects.

If, in particular, the receiver is the same system as the carrier but at a later point in time, the reversible microscopic dynamics described by the Liouville equation is universally understood as "conserving [microscopic] [structural] information" (Hawking, 2001; Zhang et al., 2013). In contrast, irreversible macroscopic dynamics is commonly associated with a loss of [macroscopic] [structural] information, directly related to the growth of thermodynamic entropy (Feistel and Ebeling, 2011; 2016; Ebeling and Feistel, 2015; Feistel, 2016). In the sense of Planck (1966), who wrote that "a macroscopic state always comprises a large number of microscopic states that combine to an average value", macroscopic structural information represents a portion of the microscopic structural information of a given system.

Structural information available from a carrier depends on the receiver, which determines what portion of this information is actually received. If, for example, the receiver is a thermometer and the carrier is a liquid, then all the information received is the temperature of the liquid. Structural information can be extracted from a given system by "measurement", e.g., when a sensor is used as a receiver. Structural information can be quantified if it is comparable to the structural information of a reference system, such as the length scale of a mercury thermometer.

A numerical value that results from a comparison between the same kinds of structural information available from two different systems, such as by counting their parts, is a "measurement result". Numbers represent information in symbolic form, as "symbolic information". The meaning of symbolic information is subject to convention (such as which "reference" system is used) and is no longer a portion of the structural information of the carrier, such as printed symbols on a sheet of paper. Very different structural information carriers can carry the same symbolic information. Symbolic information is restricted to the realm of life (Feistel and Ebeling, 2011; 2016; Ebeling and Feistel, 2015), for example in the form of genetic DNA molecules or human knowledge, and emerged from structural information in the course of evolution by a transition process regarded as ritualisation.
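The point that very different structural carriers can bear the same symbolic information can be illustrated with a toy sketch: the same character string (the symbolic information, fixed by convention) encoded in two unrelated physical representations. The encodings below are invented for illustration:

```python
message = "INFO"  # symbolic information, meaningful only by convention

# Two very different hypothetical "carriers" of the same symbols:
# ink marks on paper, modeled as a printed bit string ...
as_ink = " ".join(format(ord(c), "08b") for c in message)

# ... and voltage levels on a wire, modeled as a list of floats.
as_voltages = [3.3 if b == "1" else 0.0
               for c in message for b in format(ord(c), "08b")]

# Decoding either carrier by the agreed convention (8-bit ASCII)
# recovers the same symbolic information.
decoded = "".join(chr(int(group, 2)) for group in as_ink.split())
assert decoded == message
print(decoded)
```

The structural information of the two carriers (ink geometry vs. voltage levels) is entirely different; only the convention shared by sender and receiver makes them carry the same symbols.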

 

References

Bates, M. J. (2005) Information and knowledge: An evolutionary framework for information science, Inform. Res., v. 10, No. 4 (http://informationr.net/ir/10-4/paper239.html)

Brinkley, J.F. (1991) Structural informatics and its applications in medicine and biology, Academic Medicine, v. 66, pp. 589-591

Brinkley, J.F. (1999) What is Structural Informatics? (brinkley@u.washington.edu)

Burgin, M. (2003) Information: Problems, Paradoxes, and Solutions, TripleC, v. 1, No. 1, pp. 53-70 (http://triplec.uti.at)

Burgin, M. (2004) Data, Information, and Knowledge, Information, v. 7, No. 1, pp. 47-57

Burgin, M. (2010) Theory of Information: Fundamentality, Diversity and Unification, World Scientific, New York/London/Singapore

Burgin, M. (2011) Information Dynamics in a Categorical Setting, in Information and Computation, World Scientific, New York/London/Singapore, pp. 35-78

Burgin, M. (2012) Structural Reality, Nova Science Publishers, New York

Burgin, M. (2016) Theory of Knowledge: Structures and Processes, World Scientific, New York/London/Singapore

Ebeling, W. and Feistel, R. (2015) Selforganization of Symbols and Information, in Chaos, Information Processing and Paradoxical Games: The Legacy of John S. Nicolis (Nicolis, G.; Basios, V., Eds.), World Scientific Pub Co., Singapore, pp. 141-184

Feistel, R. (2016) Self-organisation of Symbolic Information, Eur. Phys. J. Special Topics, published online: 21 December 2016, DOI: 10.1140/epjst/e2016-60170-9

Feistel, R. and Ebeling, W. (2011) Physics of Self-Organization and Evolution, Wiley-VCH, Weinheim

Feistel, R. and Ebeling, W. (2016) Entropy and the Self-Organization of Information and Value, Entropy, v. 18, 193

Hawking, S. (2001) The Universe in a Nutshell, Bantam Books, New York

Planck, M. (1966) Theorie der Wärmestrahlung, 6. Auflage, Johann Ambrosius Barth, Leipzig

Reading, A. (2006) The Biological Nature of Meaningful Information, Biological Theory, v. 1, No. 3, pp. 243-249

Zhang, B., Cai, Q.-Y., Zhan, M.-S. and You, L. (2013) Information conservation is fundamental: Recovering the lost information in Hawking radiation, Int. J. Mod. Phys. D, v. 22, 1341014

  • Open access
  • 28 Reads
Application of Information Theory Entropy as a Cost Measure in Automatic Problem Solving

Abstract: We study the relation between Information Theory and Automatic Problem Solving to demonstrate that the Entropy measure can be used as a special case of the $-Calculus Cost Function measure. We hypothesize that Kolmogorov Complexity (Algorithmic Entropy) can be useful for standardizing the $-Calculus Search (Algorithm) Cost Function.
