
List of accepted submissions

 
 
 
Ecological Approach to Theoretical Information Studies

The most significant background for information studies is the interaction between subject and object. Within this framework, information is an ecological process, namely an information-ecology process. However, under the influence of a methodology characterized by "divide and conquer", information studies has been broken into a number of isolated pieces, losing the real, and most important, significance of the information process. Therefore, a new approach (methodology), named the Ecological Approach, is presented in this paper. As a consequence of applying the ecological approach, a number of important results in information studies have been obtained, demonstrating the value of the new approach (methodology).

Approach to Ethical Issues Based on Fundamental Informatics: "School Days With a Pig" as a Clue

1. Introduction - "School Days With a Pig"

From 1990 to 1992, a practical educational trial was conducted in an elementary school in Japan, in which pupils raised a young pig in order to eat it. The aim of the trial was to make children realize the importance of life and food, but the method, which mixes up companion animals and domestic livestock, caused an ethical controversy. The trial was made into the movie "Buta ga ita kyôshitsu / School Days With a Pig" (2008).
In this article, I discuss "School Days With a Pig" and explore the difference between companion animals and domestic livestock from the viewpoint of fundamental informatics (FI). FI is the information theory proposed by Toru Nishigaki (2004, 2008) on the basis of neo-cybernetics, a development of the cybernetics originated by Heinz von Foerster, superimposed on the autopoiesis theory of Maturana and Varela.

2. A System Composed of Communications between Children and a Pig

At the beginning of "School Days With a Pig", the children treated the pig as a pet, and soon became friends or classmates with it. We can say with fair certainty that the communications between the children and the pig composed a "communication system".
From the viewpoint of FI, this communication system can be seen not only as an autopoietic system (APS), but also as an upper-level HACS (Hierarchical Autonomous Communication System). In this case, the lower-level systems of the HACS are the mental systems of the children and the pig. These are autonomous (autopoietic) systems in themselves, while they can be seen as heteronomous (allopoietic) systems from the viewpoint of the upper-level HACS. This hierarchical view derives from a shift in the viewpoint of an observer.
Information transmission can be understood through this HACS model. Although each APS is a closed system and cannot "transmit" information in principle, information "seems to be" transmitted as long as an upper-level HACS operates continuously. In FI, all kinds of information transmission are grasped as this kind of fiction. Therefore, there is no need to ask whether "real" information transmission between the children and the pig exists or not.

3. Communication System and Ethical Norms

In FI, ethical norms are understood as a kind of "media" that guides the continuance of communications. For example, academic communications are regulated by ethical norms such as "as a scholar, set up a theory for truth, not for money, power, love, etc." Communication systems operate smoothly by virtue of these kinds of media. A sense of ethics is generated when the members of a community share the ethical norms consciously and practically.
Note that information content itself is not primarily connected to ethical problems. From a neo-cybernetic point of view, information is self-generated inside a system, for each system is operationally closed. We therefore cannot say whether information itself is ethical or not, as if it existed objectively.
What matters here is the continuance of communications. A communication system composed of communications between children and a pig is unlikely to have ethical norms as clear as those of human social systems. However, as long as such a communication system operates continuously, we can assume that some ethical norms, such as "behave as a classmate", work within the system. At least from the viewpoint of the upper-level HACS, the lower systems can be seen as ethically expected actors that contribute equally to the operation of the upper-level HACS. In this sense, each of the lower systems can even be seen as a moral actor.

4. Conversion into Non-Communicational Sign Interpreter and Abandonment of Ethical Norms

From the viewpoint of FI, the problem the children faced amidst discussion and criticism derives from an informatic difference: seeing the grown pig still as a classmate versus seeing it as food, pork, in daily life. That is, we usually do not construct a "communication system" together with pork. From an informatic viewpoint, pork is regarded merely as "signs" and humans merely as "sign interpreters". There is no ethical relationship between humans and pork.
Therefore, if the children eat the pig as pork, they must abandon the ethical norms related to the pig that they have held until then. They must stop being moral actors as classmates and kill the pig that had been an equal existence to them. The children themselves would understand this situation as "unethical", even as equal to "murder".
Livestock breeders generally do not construct a "communication system like classmates" with livestock. They treat many animals at the same time, they do not give them names, and the period of the relationship is kept to a minimum, about half a year, or one year at most. In this way, they protect themselves ethically. In contrast, the children in this trial communicated with a single animal, gave it a unique name, and raised it for as long as 900 days.
In the end, the grown pig was sent to a meat processing center. Although each APS is closed, and the external world can only provide stimuli that trigger the activities of an APS, this trial can be observed as providing particular stimuli, because it destroyed a communicational actor. Because of this particularity, the practice can be criticized as "unethical". However, that was originally intended by the teachers. This situation or relationship is a "sacrifice" from the viewpoint of the children and a "self-sacrifice" from the viewpoint of the pig, and it leads the children to a feeling of the unavoidable tragedy of life and eating, and of a kind of sacredness within a living thing that is to be eaten.

5. Conclusion

When considering ethical issues, being a communicational actor, i.e. a lower-level system of a HACS, should be distinguished from merely being a non-communicational sign interpreter. When we see living things as companion animals, we are in the former mode; when we see them as food or domestic livestock, we are in the latter.
We can assume ethical norms whenever a higher-level system of a HACS continually works as a communication system. This argument based on informatics can be a starting point for developing new arguments on ethical issues, not in terms of differences of intelligence or the importance of lives, but in terms of the possibility of constructing a communication system.

References

Toru Nishigaki. The Wisdom to Bridge the Gap between Lives and Machines: An Introduction to Fundamental Informatics.
-. "For the Establishment of Fundamental Informatics on the Basis of Autopoiesis: Consideration on the Concept of Hierarchical Autonomous Systems".
-. "The Ethics in Japanese Information Society: Consideration on Francisco Varela's The Embodied Mind from the Perspective of Fundamental Informatics". Ethics and Information Technology, Springer Netherlands, Volume 8, Number 4, Nov. 2006, pp. 237-242.

A Model of Complexity for the Legal Domain

Dr. C.N.J. de Vey Mestdagh, University of Groningen, the Netherlands, c.n.j.de.vey.mestdagh@rug.nl

Extended abstract, submission date 15-4-2017

The complexity of the universe can only be defined in terms of the complexity of the perceptual apparatus. The simpler the perceptual apparatus, the simpler the universe. The most complex perceptual apparatus must conclude that it is alone in its universe.

Abstract

The concept of complexity has been neglected in the legal domain, both as a qualitative concept that could be used to legally and politically analyse and criticize legal proceedings, and as a quantitative concept that could be used to compare, rank, plan and optimize these proceedings. In science the opposite is true. Especially in the field of Algorithmic Information Theory (AIT), the concept of complexity has been scrutinized.

In this paper we first take a quick look at AIT to see what it could mean in this phase of our research in the legal domain. We conclude that there is a difference between problem complexity and solution complexity. In this paper we therefore start to develop a model of complexity by describing problem complexity in the legal domain. We use a formal model of legal knowledge to derive and describe the parameters for describing the problem complexity of cases represented in this formal model. Further research will focus on refining and extending the formalization of the model of complexity, on comparing problem and solution complexity for several legal cases using available algorithms, and on validating the combined model against concrete cases and lawyers' and legal organizations' opinions about their complexity.

1.     Complexity in the legal domain

 

The concept of complexity is hardly developed in the legal domain. Most descriptions of concepts related to complexity in the legal literature refer to vagueness (intension of concepts), open texture (extension of concepts), sophistication (number of elements and relations) and multiplicity of norms (concurring opinions), in most cases even without explicit reference to the concept of complexity. Complexity arises in all these cases from the existence and competition of alternative perspectives on legal concepts and legal norms.[1] A complex concept or norm from a scientific point of view is not necessarily a complex concept or norm from a legal point of view. If all parties involved agree, i.e. have or choose the same perspective/opinion, there is no legal complexity, i.e. there is no case/the case is solved. In science more exact definitions of complexity are common and applied. Complexity is associated with, i.a., uncertainty, improbability and quantified information content. Despite this discrepancy between the legal domain and the domain of science, complexity is as important in the legal domain as in other knowledge domains. Apart from the obvious human interest of acquiring and propagating knowledge per se, complexity has legal, economic, political and psychological importance:

  • Legal, because a coherent concept of complexity helps to analyse and criticize legal proceedings, in order to clarify them, to enable a justified choice of the level of expertise needed to solve legal cases, and to reduce unnecessary complexity (an example of reducing complexity by compression is given in the next paragraph);
  • Economic, because complexity increases costs and measuring complexity is a precondition for reducing these costs (it can help in designing effective norms, implementing them effectively, calculating and reducing the costs of legal procedures (cf. White, M.J., 1992), planning the settlement of disputes and other legal proceedings, etc.);
  • Political, because legal complexity can be an instrument to exert power and can increase inequality;
  • Psychological, because complexity increases uncertainty.

A validated model of complexity in the legal domain can help to promote these interests (cf. Schuck, P.H., 1992; Ruhl, J. B., 1996; Kades, E., 1997).

How to develop a model of complexity in the legal domain (methodology)

In this paper we will try to bridge the gap between the intuitive definitions of complexity in the legal domain and the more exact way of defining complexity in science. We will do this on the basis of a formal model of legal knowledge (the Logic of Reasonable Inferences and its extensions) that we introduced before, that was implemented as the algorithm of the computer program Argumentator, and that was empirically validated against a multitude of real-life legal cases. The 'complexities' of these legal cases proved to be adequately represented in the formal model. In earlier research we tested the formal model against 430 cases, of which 45 were deemed more complex and 385 less complex by lawyers. A first result was that the algorithm (Argumentator), when provided with case facts and legal knowledge, solved 42 of the 45 more complex cases and 383 of the 385 less complex cases in exactly the same way as the legal experts did (including the systematic mistakes made by these experts). A second result was that the algorithm, when instructed to do so, improved the decisions in 30 (66%) of the 45 more complex cases and in 104 (27%) of the 385 less complex cases. This result confirms the relative complexity of the first 45 cases. The selection of these 45 cases thus provides us with the material from which criteria for the definition of complexity in this paper could be derived. These criteria are translated into quantitative statements about the formal representation of the cases. Further research will focus on the fine-tuning of this quantitative model by comparing its results with new empirical data (new cases and opinions of lawyers about the (subjective) complexity of cases). Finally, the ability of the fine-tuned model to predict complexity in new cases will be tested. A positive result can be applied to reduce the aforementioned costs of processing complex legal knowledge.

 

2.     Models of complexity in science

 

There are many different definitions of complexity in science. The aim of this research is to develop a measure of complexity for formal representations of legal knowledge and their algorithmic implementations. In this abstract we therefore refer to definitions of complexity from Algorithmic Information Theory (AIT), which studies the complexity of data structures (representations of knowledge in a computer). In AIT the complexity of a data structure is equated with its information content. Complexity is postulated to decrease proportionately to the degree of (algorithmic) compressibility of the data structure.

To assess the usefulness of AIT for our practical purpose, i.e. the design of a quantitative model of complexity of legal knowledge, we studied some publications from the domain of AIT. Complexity is approached as Algorithmic Probability (cf. Solomonoff's a priori probability): the higher the probability that a random computer program outputs an object, the less complex this object is considered to be. Complexity is also approached as Algorithmic Complexity (cf. Kolmogorov's descriptive complexity): the shorter the code needed to describe an object (string), the less complex this object is considered to be. This is an interesting approach, since it seems to offer a concrete measure for the complexity of certain objects (e.g. of legal problems), and it is associated with the concept of compressibility, which we are able to transpose to the legal domain as simplification (as opposed to sophistication). Finally, there are Dual Complexity Measures (cf. Burgin, 2006), which relate AIT to more complex problem structures and distinguish the complexity of the system described (the problem and its solution) from the complexity of the description (the algorithm used to describe the problem and its solution). A common and essential aspect of these approaches is the compressibility of the object as a measure of its complexity. In all these cases the computer program is considered to be an explanation of a (more or less complex) object (or data structure).

Our conclusion is that these approaches will be useful when trying to prove certain characteristics of the model of complexity in the legal domain, once it has been developed, but not primarily for the design of the model. We will have to describe the formal model and the algorithm (explanation) first. Just to get a practical insight into the concept of compressibility, we applied the idea of compressibility to some legal cases (see the example below). However, many of the characteristics of legal cases that make them 'complex' according to lawyers are not directly related to compressibility. Moreover, often the simplest 'palaver' in the legal domain is meant to be incomprehensible and therefore lacks the (semantic and relational) patterns needed for compressibility. We conclude that this concept only partially covers the problem in the legal domain. We are eager to discuss this with our colleagues in the mathematical domain.
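To make compressibility concrete in code: the following sketch (our illustration, not part of the paper) approximates descriptive complexity by compressed length, with zlib standing in for an ideal compressor, so it yields only a rough, upper-bound style estimate in the spirit of Kolmogorov complexity:

    import random
    import string
    import zlib

    def compressed_size(text: str) -> int:
        # Bytes needed for the zlib-compressed UTF-8 encoding.
        return len(zlib.compress(text.encode("utf-8")))

    patterned = "objects that have attained the waste phase " * 20
    random.seed(0)
    scrambled = "".join(random.choice(string.ascii_lowercase + " ")
                        for _ in range(len(patterned)))

    # The patterned text compresses far below its raw length; the
    # scrambled text of the same length barely compresses at all.
    print(len(patterned), compressed_size(patterned))
    print(len(scrambled), compressed_size(scrambled))

The same contrast illustrates the 'palaver' caveat above: text without repeated semantic and relational patterns is nearly incompressible, whatever lawyers would say about its difficulty.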

An example of operand compression using logical equivalence in the legal domain

Objects regulation U.1. appendix III Decree Indication Chemical Waste reads:

‘Waste products are not considered as chemical waste [cw] if they are objects [o] that have attained the waste phase of their lifecycle [wp], unless:

  1. this has happened before they have reached the user [ru];
  2. this has happened after they have reached the user [ru] and they are
    1. transformers [1] .. 10. mercury thermometers [10].'

The logical structure of this legal provision is:

             not cw is implied by o and wp and not ((not ru) or (ru and (1 or .. or 10)))

Logically equivalent to this formalisation of the provision is the formula:

             not cw is implied by o and wp and ru and not (1 or .. or 10)

which is a compression of the original provision.

Interestingly enough, the retranslation of this equivalent formula into natural language is:

‘Waste products are not considered as chemical waste if they are objects that have attained the waste phase of their lifecycle and they have reached the user and they are not 1. transformers .. 10. mercury thermometers’.

Although this example illustrates that compression can be beneficial, because it improves the readability of the regulation, it does not reduce the actual complexity of the provision, which in practice is related to different opinions about the meaning of concepts like 'waste products'.
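The claimed equivalence can also be verified mechanically. The following truth-table check is our own illustration (not part of the paper); the variable d stands for the disjunction "1 or .. or 10":

    from itertools import product

    def original(o, wp, ru, d):
        # not cw <- o and wp and not ((not ru) or (ru and d))
        return o and wp and not ((not ru) or (ru and d))

    def compressed(o, wp, ru, d):
        # not cw <- o and wp and ru and not d
        return o and wp and ru and not d

    # Both formulas agree on all 16 valuations, so they are equivalent.
    assert all(original(*v) == compressed(*v)
               for v in product([False, True], repeat=4))
    print("equivalent on all 16 valuations")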

3.     A formal model of legal knowledge (reasonable inferences)

 

The first step in developing a model of complexity in the legal domain is to describe the formal characteristics of legal knowledge that are related to the essence of complexity in this domain, i.e. the competition of opinions. In a previous publication (de Vey Mestdagh and Burgin, 2015) we introduced the following model, which allows for reasoning about (mutually exclusive) alternative opinions and for tagging the alternatives, e.g. describing their identity and context:

Our knowledge of the world is always perspective bound and therefore fundamentally inconsistent, even if we agree to a common perspective, because this agreement is necessarily local and temporal due to the human epistemic condition. The natural inconsistency of our knowledge of the world is particularly manifest in the legal domain (de Vey Mestdagh et al., 2011).

In the legal domain, on the object level (that of case facts and opinions about legal subject behavior), alternative (often contradicting) legal positions compete. All of these positions result from reasoning about the facts of the case at hand and a selection of preferred behavioral norms presented as legal rules. At the meta-level, meta-positions are used to make a choice for one of the competing positions (the solution of an internal conflict of norms, a successful subject negotiation or mediation, a legal judgement). Such a decision, based on positions that are inherently local and temporal, is by definition also local and temporal itself. The criteria for this choice are in most cases based on legal principles. We call these legal principles metaprinciples because they are used to evaluate the relations between different positions at the object level.

To formalize this natural characteristic of (legal) knowledge we developed the Logic of Reasonable Inferences (LRI, de Vey Mestdagh et al., 1991). The LRI is a logical variety that handles inconsistency by preserving inconsistent positions and their antecedents, using as many independent predicate calculi as there are inconsistent positions (Burgin and de Vey Mestdagh, 2011, 2013). The original LRI was implemented and proved to be effective as a model of and a tool for knowledge processing in the legal domain (de Vey Mestdagh, 1998). In order to be able to make inferences about the relations between different positions (e.g. to make local and temporal decisions), labels were added to the LRI. In de Vey Mestdagh et al. 2011, formulas and sets of formulas are named and characterized by labelling them in the form (Ai, Hi, Pi, Ci). These labels are used to define and restrict different possible inference relations (Axioms Ai and Hypotheses Hi, i.e. labeled signed formulas and control labels) and to define and restrict the composition of consistent sets of formulas (Positions Pi and Contexts Ci). Formulas labeled Ai must be part of any position and context and therefore are not (allowed to be) inconsistent. Formulas labeled Hi can only be part of the same position or context if they are mutually consistent. A set of formulas labeled Pi represents a position, i.e. a consistent set of formulas including all Axioms (e.g. a perspective on a world, without inferences about that world). A set of formulas labeled Ci represents a context (a maximal set of consistent formulas within the (sub)domain and their justifications, cf. the world under consideration). All these labels can be used as predicate variables and, if individualized, to instantiate predicate variables, and consequently as constants (variables as named sets). Finally, certain metacharacteristics of formulas and pairs of formulas were described by labels (e.g. metapredicates like Valid, Excludes and Prefer) describing some of their legal source characteristics and legal relations, which could be used to rank the different positions externally. The semantics of these three predicates (Valid, Exclude and Prefer) are described in de Vey Mestdagh et al. 2011. These three predicates describe the elementary relations between legal positions that are prescribed by the most fundamental sets of legal principles (i.e. principles regarding the legal validity of positions, principles regarding the relative exclusivity of legal positions even if they do not contradict each other, and principles regarding the preference of one legal position over another). It was also demonstrated that the LRI allows for reasoning about (mutually exclusive) alternatives.

In (de Vey Mestdagh and Burgin, 2015) we showed that labels can be used formally to describe the ranking process of positions and contexts. The thus extended LRI allows for local and temporal decisions for a certain alternative, that is, without discarding the non-preferred alternatives, as belief revision does, and without using the mean of all alternatives, as probabilistic logics do. This extended the LRI from a logical variety that could be used to formalize the non-explosive inference of inconsistent contexts (opinions) and the naming of (the elements of) these contexts, to a labeled logical variety in which tentative decisions can be formally represented by a labelling that allows for expressing the semantics of the aforementioned meta-predicates and for prioritizing (priority labelling). In (de Vey Mestdagh and Burgin, 2015) we illustrated the use of these labels by examples.

In the next paragraph we will use the extended LRI to identify the quantitative parameters of complexity in the legal domain.

 

4.     A formal model of the complexity of legal knowledge (parameters for a reasonable calculation of complexity)

 

The processing of legal knowledge takes place in successive phases. Each phase is characterized by its own perspectives and associated parameters of complexity. Roughly: first the different parties in a legal dispute take their positions, then the positions are confronted and a decision is made, and finally the decision is presented. The complexity of the dispute differs from phase to phase; again roughly, from intermediate (the separate positions), to high (the confrontation and decision making), to low (the decision itself). The separate positions are basically consistent, and their contents can each be processed within a separate single logical variety. When the dispute starts, complexity increases, because the shared axioms of the dispute have to be calculated, the positions are by definition mutually inconsistent, and several calculi within the logical variety have to be used to calculate the joint process of the dispute and to decide between different hypotheses within it. Ultimately the decision principles included in the different positions have to be used to rank the different consistent solutions. The dispute ends with the presentation of the highest-ranking consistent (local and temporal) decision, representing a concurring opinion or a compromise. The complexity of this result is reduced again, because it can be (re)calculated within a single consistent variety. Below we describe these phases in more detail, and the related parameters of complexity, in terms of the formal model introduced above.

In a given case, complexity can be quantified on the basis of the case elements and relations presented by all parties. The processing takes place in five phases (a schematic code sketch follows the five steps below):

At the start of legal knowledge processing the case can be described as:

  • A number of sets n (the number of parties involved) of labelled formulas Hi,l representing the initial positions of each of the parties in a legal discourse, i.e. hypotheses i of parties l about the (alleged) facts and applicable norms in a legal case;

The next step is:

  • Determining the intersection of these sets Hi,l, which defines Ai, representing the agreed case facts and norms, and determining the union of all complements, which defines Hi; (Ai, Hi) represents the initial case description.

The third step is:

  • Calculating all possible minimal consistent positions Pi that can be inferred from (Ai, Hi), applying a logic, e.g. the LRI, a logical variety that allows each position to be established by its own calculus. If these calculi differ, this adds to the complexity of the problem. In earlier publications we assumed all the calculi to be the same (predicate calculus).

The fourth step is:

  • Calculating all maximal consistent contexts (cf. possible consistent worlds) Ci on the basis of (Ai, Hi, Pi).

The last step is:

  • Making a ranking of these contexts on the basis of the application of the metanorms (decision criteria) included in them. A formal description and an example of this process are given in (de Vey Mestdagh and Burgin, 2015).
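The five phases can be made concrete in a deliberately simplified sketch (our illustration, not the Argumentator implementation): formulas are opaque strings, and inconsistency is declared as an explicit set of conflicting pairs rather than derived by a calculus.

    from itertools import combinations

    parties = [
        {"f1", "f2", "n1"},                 # hypotheses of party 1
        {"f1", "f2", "n2"},                 # hypotheses of party 2
    ]
    conflicts = {frozenset({"n1", "n2"})}   # n1 and n2 contradict each other

    # Phases 1-2: agreed axioms A and disputed hypotheses H.
    A = set.intersection(*parties)
    H = set.union(*parties) - A

    def consistent(formulas):
        return not any(frozenset(pair) in conflicts
                       for pair in combinations(formulas, 2))

    # Phases 3-4: consistent extensions of A by subsets of H;
    # the maximal ones play the role of the contexts Ci.
    subsets = [set(c) for r in range(len(H) + 1)
               for c in combinations(H, r)]
    extensions = [A | s for s in subsets if consistent(A | s)]
    contexts = [e for e in extensions
                if not any(e < f for f in extensions)]

    # Phase 5: rank the contexts by a metanorm, here simply "prefer n2".
    ranked = sorted(contexts, key=lambda c: "n2" in c, reverse=True)
    print(ranked)

Under these toy inputs the two maximal contexts are {f1, f2, n1} and {f1, f2, n2}, and the metanorm ranks the latter first.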

Each step in this process is characterized by its own parameters of complexity. In legal practice different procedures are used to determine and handle (reduce) complexity in these different phases.

In the first phase a direct, static measure of complexity is commonly applied: the number of parties and the number of hypotheses. This is a rough estimate of the number of different positions (interpretations, perspectives, interests).

In the second phase a direct, relative measure of complexity is commonly applied: the number of Ai and its size relative to Hi. The larger the relative size of Ai, the less complex a case is considered to be, because there is supposed to be more consensus.

In the third and fourth phases all positions Pi and contexts Ci are derived:

Given the resulting set of labelled formulas (Ai, Hi, Pi, Ci) representing the legal knowledge presented in a certain case, the problem complexity of this set can be defined as follows (a sketch in code follows the list):

  1. The subset Ai (agreed case facts and norms) is by definition included in each Pi and Ci, so its inclusion as such is not a measure of complexity, as it reflects absolute consent;
  2. The elements of the subset Hi are by definition not included in each Pi and Ci, so the relative size of the inclusion of its elements is a measure of complexity, as it reflects relative consent. The more conformity, the less complexity. It is even possible that certain elements of the subset Hi are not included in any Pi or Ci. The number of these 'orphaned' elements can also contribute to the complexity of a case, because they represent antecedents without a consequent or consequents without an antecedent (a decision is proposed without justification). Orphaned elements can be the result of incompletely presented positions or, worse, be smoke screens;
  3. The size of the fraction of subset Ai in (Ai, Hi), relative to the fraction of Ai in other cases, is a measure of complexity, as it reflects the size of the shared (consented) knowledge in a legal dispute. This holds even if the size of Ai is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the Ai into consideration;
  4. The size of the fraction of subset Hi in (Ai, Hi), relative to the Hi in other cases, is a measure of complexity, as it reflects the size of the disputed knowledge in a legal dispute. This holds even if the size of Hi is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the Hi into consideration;
  5. The relative size of the subset Pi (relative to the Pi in other cases) is a measure of complexity, as it reflects the number of different minimal positions that can be taken logically in this specific case. The size of Pi can only be manipulated indirectly (through the respective sizes of Ai and Hi);
  6. The relative size of the subset Ci (relative to the Ci in other cases) is a measure of complexity, as it reflects the number of different consistent contexts (possible decisions) that can be distinguished in this specific case.
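In code, these six points reduce to a handful of counts and ratios. The sketch below uses our own notation, not the Argumentator code; the ratios are meant to be compared across cases, as points 3-6 require:

    # A, H are sets of formulas; positions and contexts are lists of sets.
    def problem_complexity(A, H, positions, contexts):
        used = set().union(*positions, *contexts)     # formulas occurring somewhere
        orphaned = [h for h in H if h not in used]    # point 2: unjustified elements
        total = len(A) + len(H)
        return {
            "consensus_ratio": len(A) / total,        # point 3: agreed knowledge
            "dispute_ratio": len(H) / total,          # point 4: disputed knowledge
            "orphaned_hypotheses": len(orphaned),     # point 2
            "n_positions": len(positions),            # point 5: minimal positions
            "n_contexts": len(contexts),              # point 6: possible decisions
        }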

In the fifth phase ranking of the contexts takes place.

The number of rankings depends on the inclusion of metanorms in the respective contexts. Metanorms that are agreed upon are part of Ai; metanorms that are not agreed upon are part of Hi. The process of applying the metanorms is fully recursive, since the objects of the metanorms are other (meta)norms, which are themselves also part of (Ai, Hi). This means that the determination of the complexity of the application of the metanorms is included in the previous phases. In this phase only the resulting number of rankings is established; it can be considered an independent measure of complexity.

5.     Validation of the model of complexity

 

The model of parameters for a reasonable calculation of the complexity of legal knowledge described in the previous paragraph is based on prior theoretical and empirical research into the complexity of legal knowledge (de Vey Mestdagh, 1997, 1998). A total of 430 environmental law cases were formally represented in the formal model of legal knowledge introduced in paragraph 3 (the extended LRI), and their relative complexity was established on the basis of legal expert judgements. In the experts' opinion, 45 cases were of a complex nature and 385 of a less complex (more general) nature. This was verified by applying an expert system to these cases that was enabled (provided with more complete data and knowledge) to improve on the human judgements in the 430 cases. The test results showed that in the complex cases 66% of the human judgements were improved by the expert system (of which 20% were full revisions), while in the general cases only 27% of the human judgements were improved (of which only 2% were full revisions). The complex cases are characterized by higher counts of the parameters distinguished in the previous paragraph.

Further validation research is needed to refine the model of parameters for a reasonable calculation of the complexity of legal knowledge described in the previous paragraph. The relative weights of the counts of the parameters described will be varied against the available dataset of legal cases. The results will also be correlated with other available variables to gain further insight into possible parameters of complexity. Examples of such variables are: the number of submitted documents, the length of the procedure, the number of appeals, the spending power of the parties involved, the level of expertise of the lawyers involved, etc.

6.     Conclusion and further research

 

In this paper we have explored the concept of complexity in the legal domain. A first conclusion is that the concept has not been studied explicitly in the legal domain: only indirectly, as a qualitative concept (vagueness, open texture, etc.), and hardly ever as a quantitative concept. However, a quantitative model of complexity in the legal domain has, apart from its scientific meaning per se, legal, economic and political implications. It will allow us to improve the quality and efficiency of legal proceedings. Algorithmic Information Theory offers several approaches to the quantification of complexity that inspired the approach chosen in this paper. It induced the thought that a distinction between problem complexity and solution complexity is necessary, and that a model of complexity based on the formal representation of legal knowledge should be the first step in developing a model of complexity in the legal domain. In this paper we give a description of a formal representation of legal knowledge (the extended Logic of Reasonable Inferences) and describe the quantitative parameters of complexity for this model. The result we would like to call Reasonable Complexity, because it is based on the LRI and because it inherits its relative, perspective-bound character. Complexity is specifically relative to the number of perspectives combined in the knowledge under consideration. Further research will focus on extending the model of complexity to solution complexity, using, amongst others, available algorithms (i.a. Argumentator, a computer program we developed to implement the LRI). It will also use an available dataset of 430 environmental law cases that have been described and analysed before and that have already been represented in Argumentator.

References

Burgin, M.: Super-Recursive Algorithms, Springer Science & Business Media, 2006

Burgin, M., de Vey Mestdagh, C.N.J.: The Representation of Inconsistent Knowledge in Advanced Knowledge Based Systems. In: Andreas Koenig, Andreas Dengel, Knut Hinkelmann, Koichi Kise, Robert J. Howlett, Lakhmi C. Jain (eds.). Knowledge-Based and Intelligent Information and Engineering Systems, vol. 2, pp. 524-537. Springer Verlag, ISBN 978-3-642-23862-8, 2011

Burgin, M., de Vey Mestdagh, C.N.J.: Consistent structuring of inconsistent knowledge. In: J. of Intelligent Information Systems, pp. 1-24, Springer US, September 2013

Dworkin, R.: Law's Empire, Cambridge, Mass., Belknap Press, 1986

Hart, H.L.A.: The Concept of Law, New York, Oxford University Press, 1994

Kades, E.: The Laws of Complexity & the Complexity of Laws: The Implications of Computational Complexity Theory for the Law (1997). Faculty Publications. Paper 646. http://scholarship.law.wm.edu/facpubs/646

Ruhl, J. B.: Complexity Theory as a Paradigm for the Dynamical Law-and-Society System: A Wake-Up Call for Legal Reductionism and the Modern Administrative State. Duke Law Journal, Vol. 45, No. 5 (Mar., 1996), pp. 849-928

Schuck, Peter H.: Legal Complexity: Some Causes, Consequences, and Cures. Duke Law Journal, Vol. 42, No. 1 (Oct., 1992), pp. 1-52

Vey Mestdagh, C.N.J. de, Verwaard, W., Hoepman, J.H.: The Logic of Reasonable Inferences. In: Breuker, J.A., Mulder, R.V. de, Hage, J.C. (eds) Legal Knowledge Based Systems, Model-based legal reasoning, Proc. of the 4th annual JURIX Conf. on Legal Knowledge Based Systems, pp. 60-76. Vermande, Lelystad, 1991

Vey Mestdagh, C.N.J. de.: Juridische Kennissystemen, Rekentuig of Rekenmeester?, Het onderbrengen van juridische kennis in een expertsysteem voor het milieuvergunningenrecht (proefschrift), 400 pp., serie Informatica en Recht, nr. 18, Kluwer, Deventer, 1997, ISBN 90 268 3146 3;

Vey Mestdagh, C.N.J. de: Legal Expert Systems. Experts or Expedients? In: Ciampi, C., E. Marinai (eds.), The Law in the Information Society, Conference Proceedings on CD-Rom, Istituto per la documentazione giuridica del Centro Nazionale delle Ricerche, Firenze, 2-5 December 1998, 8 pp.

Vey Mestdagh, C.N.J. de, Hoepman, J.H.: Inconsistent Knowledge as a Natural Phenomenon: The Ranking of Reasonable Inferences as a Computational Approach to Naturally Inconsistent (Legal) Theories. In: Dodig-Crnkovic, G. & Burgin, M. (Eds.), Information and Computation (pp. 439-476). New Jersey: World Scientific, 2011

Vey Mestdagh, C.N.J. de, Burgin, M.: Reasoning and Decision Making in an Inconsistent World: Labeled Logical Varieties as a Tool for Inconsistency Robustness. In: R. Neves-Silva, L. C. Jain, & R. J. Howlett (Eds.), Intelligent Decision Technologies. (pp. 411-438). Smart Innovation, Systems and Technologies; Vol. 39. Springer, 2015

White, M.J.: Legal Complexity and Lawyers’ Benefit from Litigation. International Review of Law and Economics (1992) 12, 381-395.

[1] Cf. H.L.A. Hart, who uses the concept of discretion to characterize hard (complex) cases, in The Concept of Law, New York, Oxford University Press, 1994; and R. Dworkin, who distinguishes easy from hard cases using the concept of principled interpretation, in Law's Empire, Cambridge, Mass., Belknap Press, 1986. Although fundamentally differing in their opinions about the sources of the decision criteria, they both acknowledge the alternative perspectives that play a role in deciding complex cases (the judge's discretion in the light of the parties' alternative perspectives vs. the judge's principled interpretation in the context of the parties' alternative perspectives).

Cuts, Qubits, and Information

In his search for the 'essence' of continuity, Richard Dedekind (1872) discovered the notion of the cut. Epistemologically speaking, a cut produces a separation of a simply infinite system into two parts (Stücke) such that all the elements of one part are screened off from all the elements of the other. The distinct continuity of a two-state quantum system is encapsulated in the notion of the qubit, the basic 'unit' of quantum information. A qubit secures an infinite amount of information, which, however, appears to be penetrable only through 'sections' of classical bits. Whereas Dedekind's cuts dwell on the discrete of number theory, the theory of nature is primarily concerned with continuous transformations. In contrast with Dedekind's line of thought, could the notion of information be derived from a 'principle' of continuity?

1. The ‘Phenomenon’ of the Cut

Dedekind's main concern was to cleanse the science of numbers of foreign notions, such as measurable quantities or geometrical evidence. Hence, the real challenge was to extract a purely arithmetic and perfectly rigorous definition of the essence of continuity from the discrete of rational numbers.

The vexata quaestio of "continuity and irrational numbers" originated with the Pythagorean discovery of incommensurable ratios. In the eyes of the Pythagoreans, however, it was the divergence between the harmony of geometrical forms and the "atomism" of numbers that was disturbing. The early Pythagoreans "did not really distinguish numbers from geometrical dots. Geometrically, then, a number was an extended point or a very small sphere" (Kline 1972: 29). By contrast, in Dedekind's view, "numbers are free creations of the human mind; they serve as a means of apprehending more easily and more sharply the difference of things" (1888: 791).

Amazingly, Dedekind extracted the essence of continuity from cuts. Considering that every point produces a separation of the straight line into two parts such that every point of one part lies to the left of every point of the other, Dedekind recognized the special character of continuity in the converse, i.e. in the following principle:

  • If all points of the straight line fall into two classes such that every point of the first class (Klasse) [A1] lies to the left of every point of the second class [A2], then there exists one and only one point which produces this division of all points into two classes, this severing of the straight line into two portions. (Dedekind 1872: 771)

What is precisely determined is primarily the division itself [1]. Hence, whenever we have a cut (A1, A2) produced by no rational number, we can create a new number, an irrational number, which we regard as completely defined by this cut (Dedekind 1872: 773). So the system of real numbers is obtained by filling up the gaps in the domain of rational numbers and making it continuous, "taking the object that fills each gap to be essentially the gap itself" (Stillwell 2010).
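To make the creation of such a number concrete, here is the standard worked example in modern notation (our illustration; the symbols are not Dedekind's own):

    \[
    A_1 = \{\, x \in \mathbb{Q} : x \le 0 \ \text{or}\ x^2 < 2 \,\}, \qquad
    A_2 = \mathbb{Q} \setminus A_1 .
    \]

No rational number produces this cut, since no rational r satisfies r^2 = 2; the cut (A1, A2) itself therefore defines the new number we call the square root of 2.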

Dedekind's notion of 'cut' raises distinguishability to a higher level, from (integer) numbers to classes, from elements to properties, taking into account not solely the relations of one individual number to another, but also the relations between (infinite) sets of elements. As Dedekind emphasized, if "one regards the irrational number as the ratio of two measurable quantities," then this manner of determining it is already set forth in the clearest possible way by Euclid. But a presentation "in which the phenomenon of the cut in its logical purity is not even mentioned, has no similarity whatever to mine, inasmuch as it resorts at once to the existence of a measurable quantity, a notion which (...) I wholly reject" (Dedekind 1888: 794).

2. From Atoms to Qubits

While Dedekind’s axiom of continuity guarantees the logical purity of real numbers, physics needs measurable quantities to unravel the continuity of nature into elements. It is noteworthy that the Pythagorean arithmetical atomism as well as the Democritean physical atomism were stuck on ‘continuity.’ Since the harmony of (natural) forms ought to be expressed by (whole) numbers, there was no way to fill the gap between the finite and the infinite. If the discovery of incommensurable ratios meant the departure of geometric constructions from arithmetic operations, Zeno’s paradoxes made it clear that motion is not attainable by summing up an infinite series of discrete states.

It is a great achievement of quantum theory to have read the divide between measurable quantities and continuous transformations as a dialectic contrast and to have made it the source of physical meaning.

 2.1.  Ghost fields

Interestingly, a 'quantum Zeno effect' was first noticed by John von Neumann (1932): a sequence of measurements frequently performed on a quantum system can slow down or even halt the evolution of its state. As a consequence of the quantum Zeno effect, in a classical interference experiment [2], when a photon emerging from a Mach-Zehnder interferometer informs us that a 'which-path' measurement was set on its way, the probability that no measurement was actually performed (i.e., no photon-observer interaction took place) can be stretched to the limit of 1.

The debate on the impact of 'null-result' measurements on the behaviour of quantum systems or, more generally, on the nature of quantum interference urged the search for a more 'sensible' description of physical reality. It is well known that Einstein, Podolsky and Rosen's celebrated essay (1935) was supposed to highlight the conflict between the completeness of the quantum physical description of physical reality and Heisenberg's uncertainty relations, but in fact it drew attention to a form of 'non-locality' underlying quantum physics. When measurements are performed on certain pairs of particles, the values of the same physical quantity for the two separated particles appear instantly correlated. Seemingly, the failing attempts to find a reasonably 'realistic' (via experiment) explanation of quantum interference effects led Einstein to coin the term 'ghost fields' (Gespensterfelder) for quantum waves.

2.2.  Perspectives on distinguishability

Rather than calling non-locality into question, quantum correlations illuminate a notion of non-separability, called 'entanglement.' As Schrödinger (1935) observed, two quantum systems can interact in such a way that only the properties of the pair are defined. Consider for instance the spin components. Although any individual particle holds a set of well-defined values, once two particles get entangled in a pair, the spin of one particle and that of the other go in the same direction or in opposite directions; 'being the same' or 'being opposed' are clearly properties concerning two objects. Accordingly, quantum theory forges purely relational properties, which do not work for individual systems.

As for a measurement on a single particle, it also involves a correlation between two 'subjects': the system and the observer. Any physical system comprises a set of characteristic 'potential' features. To become 'temporarily real' (observable), any of these features is bound to a feasible system-observer interaction. In this perspective, any measurement brings about a special 'relational property' of the pair (cf. Rovelli 1996). To the extent that measurement can be viewed as an interaction in which a certain perspective on one observable determines the distinct value to be ascribed to that observable, it requires us to refine the very notion of 'distinguishability.' To satisfy this requirement, quantum theory introduces complex probability amplitudes, which quantify the angular separation between alternative possibilities and must be squared to generate probabilities.
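The step from amplitudes to probabilities can be stated in a few lines. The sketch below (our illustration, using numpy) applies the Born rule to a coherent superposition and shows that two states with identical probabilities in one basis can still be perfectly distinguishable, which is what the 'angular separation' between alternatives captures:

    import numpy as np

    # A coherent superposition alpha|0> + beta|1>.
    state = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])

    probs = np.abs(state) ** 2            # Born rule: squared moduli
    print(probs)                          # [0.5 0.5]
    assert np.isclose(probs.sum(), 1.0)

    # A second state with the same basis probabilities but opposite phase.
    other = np.array([1 / np.sqrt(2), -1j / np.sqrt(2)])
    overlap = np.abs(np.vdot(state, other)) ** 2
    print(overlap)                        # 0.0: the two states are orthogonal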

Thus, the classical tenet that measurement unveils a property of the system must be revised. It is wrong to attribute a feature to a quantum system until a measurement has brought it to a close by an act of irreversible amplification (cf. Wheeler 1982).

2.3.  The "Elementary Quantum Phenomenon"

"One who comes from an older time and is accustomed to the picture of the universe as a machine built out of 'atoms' is not only baffled but put off when he reads [...] Leibniz's conception of the ultimate building unit, the monad" (Wheeler 1982: 560). What Leibniz wrote about the "monad", Wheeler observed, is more relevant to what he called the "quantum phenomenon" than to anything one has ever called an 'atom'. The very word 'phenomenon', according to Wheeler, is the result of a long-lasting debate between Bohr and Einstein about the logical self-consistency of quantum theory and its implications for reality: "No elementary phenomenon is a phenomenon until it is a registered (observed) phenomenon." But Leibniz's monad has neither extension nor shape, hence it is not observable.

A monad is a simple substance and a unity of perceptions. As a unity of perceptions, it contains the whole universe. As a simple substance, it is not a 'tangible' thing, but rather the 'perceiving faculty' itself. Indeed, perception performs the constant inner change and also, as a function of correlation, enables monads to express each other:

  • This interconnection or accommodation of all created things to each other, and each to all the others, brings it about that each simple substance has relations that express all the others, and consequently, that each simple substance is a perpetual living mirror of the universe. (Monadology 56; Leibniz 1989)

How can we draw 'meaningful' perceptions, i.e. natural phenomena, from an impenetrable faculty of perceiving, from the infinite unity of each monad? More than to the 'quantum phenomenon', the characteristic features of the monad apply to the qubit.

Like a monad, a qubit, the basic unit of quantum information theory, involves an infinite multitude. As a two-state quantum system, it can be prepared in a coherent superposition of two distinguishable states. Yet there is no way to extract information from qubits other than by measuring them with 'yes-no' questions.
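A rough simulation (ours, with an arbitrary amplitude) makes the point: each 'yes-no' question returns a single classical bit, and the continuous amplitude is only recoverable as a statistic over many identically prepared copies:

    import numpy as np

    rng = np.random.default_rng(42)
    p1 = 0.3                              # |beta|^2 of some superposition
    answers = rng.random(10_000) < p1     # 10,000 one-bit measurement outcomes
    print(answers.mean())                 # ~0.3: the amplitude only emerges statistically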

3. The Essence of Information

In the 'artificial' construction of a theoretical model, be it Euclidean geometry or the universal computer, one starts with distinct elements and ponders how to achieve the connecting structure. In the attempt to figure out the intelligence of nature, one starts with the structure and tries to analyze it into elements. At its heart stands the ultimate inner principle of 'existence': a principle of metamorphosis.

Reversing the Euclidean perspective, in his search for a general geometric characteristic, Leibniz pursued the ‘inner principle’ of geometry:

  • Imagine taking two points in space, hence conceiving the indeterminate straight line through them; one thing is that each point is regarded individually as single, another thing is that both are regarded as simultaneously existing; besides the two points, something else is needed for seeing them as co-existent in their respective positions. When we consider one of the two points as if we took its position and looked at the other (point), what the mind determines is called direction. (Leibniz 1995: 278)

Time enters geometry and generates the concept of space: "Space is the continuity in the 'order of co-existence' according to which, given the co-existence relation in the present and the law of changes (lege mutationis), the co-existence relation at any given time can be defined."

For Leibniz, the whole universe is encapsulated in every monad from the very beginning, and the simple substance of the monad coincides with the continuity principle of the disclosure of itself. Therefore, every monad must also be endowed with an original faculty of representing, which makes it able to match the variety of phenomena. To deliver 'information' about the universe, perceptions must become observable in the guise of phenomena.

Now, like a straight line, each perception needs two co-existent elements to be determined by an external observer. Thus, all measurable quantities (i.e., the basic constituents of physics) must come into existence as pairs. This imposes a constraint on natural phenomena: given the infinity of perceptions, the number of natural elements must be the logarithm to base two of that infinity. In this sense, Pythagoras correctly drew the geometry of nature from whole numbers. On the other hand, Leibniz insightfully saw the infinite multitude of natural forms as related to the different points of view of each monad.

In Leibniz’s world, however, there is no conflict between the continuity of the simple substance and the distinguishability of perceptions, because each monad is a “living mirror of the universe.” By contrast, in the (quantum) physical world, distinct points of view influence the spectral decomposition itself. The ‘substance’ of nature is captured by a unitary transformation, but physical knowledge rests upon cross-ratios between distinct perceptions. Thus, the essence of information springs from correlations.

References

  • R. Dedekind (1872), Continuity and Irrational Numbers, (Ewald 1996: 765-779)
  • R. Dedekind (1888), Was sind und was sollen die Zahlen?, (Ewald 1996: 790-833)
  • A. Einstein, B. Podolsky, N. Rosen (1935), Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?, Physical Review, 47: 777-780.
  • W. Ewald (ed.) (1996) From Kant to Hilbert: A source book in the foundations of mathematics. Vol. 2 (Oxford Univ. Press, Oxford)
  • M. Kline (1972) Mathematical Thought from Ancient to Modern Times. Vol. 1 (Oxford University Press, Oxford)
  • W. Leibniz (1989) Philosophical Essays, eds. Ariew, R. and Garber, D. (Hackett Publishing Company, Indianapolis & Cambridge)
  • W. Leibniz (1995) La caractéristique géométrique, eds. Echeverría, J. and Parmentier, M. (Librairie philosophique J. Vrin, Paris)
  • C. Rovelli (1996), Relational Quantum Mechanics, Int. J. Theor. Phys., 35: 1637-1678
  • E. Schrödinger (1935), Die gegenwärtige Situation in der Quantenmechanik, Naturwissenschaften, 23: 807-812.
  • J. Stillwell (2010) Roads to Infinity. (A. K. Peters, Natick, MA)
  • J. von Neumann (1932) Mathematische Grundlagen der Quantenmechanik. (Springer, Berlin)
  • J. A. Wheeler (1982), The Computer and the Universe, Int. J. Theor. Phys., 21: 557-572

Notes

[1] Any separation of the domain of rational numbers into two classes, A1 and A2 , such that every number of one class is less than every number of the other, defines a real number.

[2] Think of photons sent through a Mach-Zehnder interferometer. After encountering the first beam-splitter, each photon can choose between two mutually exclusive paths to reach the second beam-splitter.

Information — Semantic Definition or Physical Entity?

Although the term "information" has been discussed in many publications over recent decades, a generally accepted definition of information does not exist. The prevailing discourse focuses on semantic and technical definitions, but with the rising vision of quantum computing, physicists too have become more interested in understanding information. Still, semantic definitions seem stronger today than physical concepts. The reason might be the experience that information pervades all scales, from the quantum level to a railway signal. This fact is easier to address semantically than by a physical entity. In the similar case of the entity "energy", it took nearly two centuries to arrive at a fundamental and finally accepted definition.
Drawing on that experience with the term "energy", the present approach looks for a definition of the smallest part of information. Based on such a simple entity, formalisms are needed to describe more complex information structures at higher scales. Finally, such a definition has to be compatible with semantic definitions of information. The quest for the smallest information starts with an analogy to the pixel.
Today's ubiquitous concept of a pixel is based on its technical use to characterize visual media technologies like scanners, displays and printers by their capability to represent information. The definition of a pixel addresses its two key features: (1) it is defined as the smallest addressable piece of information in that specific technological context; (2) as an additional requirement of the visual application, a pixel has to be specified small enough that it remains indistinguishable to the human eye. For the eye the single pixel does not exist, but an observer will be able to recognize structures of multiple pixels. And a wide variety of different structures with different functions may arise out of these individually invisible pixels. For this artificial and fully controllable, but real, system of pixels we can discuss basic features of emergence. The focus of these considerations is not the semantic understanding of pattern generation, but the characterization of the process. The example of pixels offers an opportunity for an abstract formulation of emergence and its relation to information.
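As a minimal illustration (ours, with arbitrary parameters), the fact that structure lives in the relations between many pixels rather than in any single pixel can be made tangible by compressing a patterned and a random bit grid of equal size:

    import random
    import zlib

    random.seed(1)
    n = 64
    # A checkerboard has strong multi-pixel structure; noise has none.
    checker = bytes((x + y) % 2 for y in range(n) for x in range(n))
    noise = bytes(random.randint(0, 1) for _ in range(n * n))

    print(len(zlib.compress(checker)))  # small: the pattern is compressible
    print(len(zlib.compress(noise)))    # large: no recognizable structure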
The observations of the interdependencies at this macroscopic emergent situation can be used for the further argumentation by reducing the size of the pixel. Scaling down though different levels of so called “mega evolution”, we finally can look out for candidates for the smallest information.
The current debate among physicists offers quantum dots, the Minkowski space-time cell and black holes as smallest physical entities. Also if keeping the Bremermann limit and Landauer's principle in mind, a kind of an ontological gap remains if these smallest physical phenomena are taken as physical pixels, representing something like pure "information".
One hypothesis to bridge this ontological gap will be presented. Based on the emergent understanding of information it is assumed that there must be a smaller informational entity below the physical limit. The ontological, or better mathematical argument for this assumption is discussed. In an admittedly hypothetical manner we can define in this sub-physical approach the smallest form of information –the initial pixel– and an elementary emergent process. A modern understanding in mathematics inspirits the idea for this approach, as it can be found in the Homology Type Theory (HoTT). There mathematics is not a fixed, steady logical structure, only explore able by a human brain. Mathematics itself might be an infinite process, independently from human understanding. And information has to be identified as a concept, which has to be a constitutive element of mathematics.
This paper cannot prove this hypothesis by physical arguments; that will be a task for further investigations. But it offers a feasible explanation for the semantic part of information and a link from the very basic but simple definition of information to its complex appearances.
The idea of this paper is not to offer a final solution but to trigger a discussion about what is further needed to clarify the remaining obstacles.

  • Open access
  • 45 Reads
The Difference That Makes a Difference for the Conceptualization of Information

Information is a subject of multiple efforts of conceptualization, leading to controversies. Too rarely is sufficient effort made to formulate the concept of information in a way that leads to a formal mathematical theory, and discussions of conceptualizations of information usually focus on the articulation of definitions, not on their consequences for theoretical studies. This paper compares two conceptualizations of information by exploring their mathematical theories. One of these concepts and its mathematical theory were introduced in earlier publications of the author: information was defined in terms of the opposition of one and many, and its theory was formulated in terms of closure spaces. The other concept of information was formulated in a rather open-ended way by Bateson as "any difference that makes a difference"; there are some similarities between Bateson's concept of information and that of MacKay. In this paper a mathematical theory is formulated for this alternative approach to information, founded on the concept of a difference expressed as a generalized orthogonality relation. Finally, the mathematical formalisms for both approaches are compared and related. The comparison concludes that the approach to information founded on the concept of difference is a special case of the approach based on the one-and-many opposition.
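To make the link concrete, here is a minimal sketch (our illustration with an invented relation, not the author's formalism) of the standard fact that any symmetric "difference" relation induces a closure operator via double orthocomplement, which is what connects the difference-based approach to closure spaces:

```python
# Toy universe and a hypothetical symmetric "difference" relation:
# (x, y) in differs means "x differs from y".
universe = {"a", "b", "c", "d"}
differs = {("a", "c"), ("c", "a"), ("a", "d"), ("d", "a"),
           ("b", "d"), ("d", "b")}

def ortho(xs):
    """Orthocomplement: all elements differing from every element of xs."""
    return {y for y in universe if all((y, x) in differs for x in xs)}

def closure(xs):
    """Double orthocomplement -- a closure operator on subsets."""
    return ortho(ortho(xs))

print(closure({"a"}))                              # closed set generated by {"a"}
print(closure(closure({"a"})) == closure({"a"}))   # idempotent: True
```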

  • Open access
  • 52 Reads
The General Theory of Information as a Unifying Factor for Information Studies: The noble eight-fold path
Published: 09 June 2017 by MDPI in DIGITALISATION FOR A SUSTAINABLE SOCIETY session KEYNOTES

The General Theory of Information as a Unifying Factor for Information Studies

The noble eight-fold path

Mark Burgin

University of California, Los Angeles,

520 Portola Plaza, Los Angeles, CA 90095, USA

 

 

Abstract: We analyze the advantages and new opportunities that the general theory of information (GTI) provides for information studies.

  1. Introduction

The general theory of information (GTI) is a novel approach, which offers powerful tools for all areas of information studies. The theory has three components:

•  The axiomatic foundations

•  The mathematical core

•  The functional hull

In Section 2, we give a very brief exposition of the axiomatic foundations of the general theory of information. The mathematical core is presented in (Burgin, 1997; 2010; 2011; 2011b; 2011c; 2014) and some other publications. In Section 3, we demonstrate the advantages and new opportunities that the general theory of information provides for science in general and for information studies in particular.

 

  2. Axiomatic foundations of the general theory of information

The axiomatic foundation consists of principles, postulates and axioms of the general theory of information.

•  Principles describe and explain the essence and main regularities of the information terrain.

•  Postulates are formalized representations of principles.

•  Axioms describe mathematical and operational structures used in the general theory of information.

There are two classes of principles:

Ontological principles explain the essence of information as a natural and artificial phenomenon.

Axiological principles explain how to evaluate information and what measures of information are necessary.

At first, we consider ontological principles.

There are three groups of ontological principles:

•  Substantial ontological principles [O1, O2 and its modifications O2g, O2a, O2c] define information.

•  Existential ontological principles [O3, O4, O7] describe how information exists in the physical world.

•  Dynamical ontological principles [O5, O6] show how information functions.

Ontological Principle O1 (the Locality Principle). It is necessary to separate information in general from information (or a portion of information) for a system R.

In other words, empirically, it is possible to speak only about information (or a portion of information) for a system. This principle separates local and global approaches to information definition, i.e., in what context information is defined.

The Locality Principle explicates an important property of information but says nothing about what information is. The essence of information is described by the second ontological principle, which has several forms.

Ontological Principle O2 (the General Transformation Principle).  In a broad sense, information for a system R is a capacity to cause changes in the system R.

Thus, we may understand information in a broad sense as a capacity (ability or potency) of things, both material and abstract, to change other things. Information exists in the form of portions of information.

The Ontological Principle O2 is fundamental as it intimately links information with time. Changes to R, when they occur by reception of information, are defined here to be the result of a causal process. Causality necessarily implies that the related effect happens after its cause. The Ontological Principle O2 leaves open the question whether the potential causal changes may or must be irreversible.

The Ontological Principle O2 unifies dynamic aspects of reality because information in a broad sense projected onto three primal components of reality – physical reality, mental reality and structural reality - amalgamates the conceptions of information, physical energy and mental energy with its special form, psychic energy, in one comprehensive concept.

Being extremely wide-ranging, this definition supplies meaning and explanation to von Weizsäcker's conjecture that energy might in the end turn out to be information, to Wheeler's aphorism "It from Bit", and to Smolin's statement that the three-dimensional energetic world is the flow of information.

Mental energy is considered as a mood, ability or willingness to engage in some mental work and is often related to the activation level of the mind. The concept stems from an "energy of the soul" introduced by Henry More in his 1642 Psychodia platonica.

Psychic energy has become an essential component of several psychological theories. At first, the concept of psychic energy, also called psychological energy, was developed in the field of psychodynamics by German scientist Ernst Wilhelm von Brücke (1819-1892). Then it was further developed by his student Sigmund Freud (1856-1939) in psychoanalysis. Next step in its development was done by his student Carl Gustav Jung (1875-1961).

Mental energy is innate for any mentality, while psychic energy is related only to the human psyche.

The next principle is

Ontological Principle O2g (the Relativized Transformation Principle). Information for a system R relative to the infological system IF(R) is a capacity to cause changes in the system IF(R).

The concept of infological system plays the role of a free parameter in the general theory of information, providing for representation of different kinds and types of information in this theory. That is why the concept of infological system, in general, should not be limited by boundaries of exact definitions. A free parameter must really be free. Identifying an infological system IF(R) of a system R, we can define different kinds and types of information.

Here are examples from popular information  theories:

In Shannon’s information theory (or, more exactly, theory of communication), information is treated as elimination of uncertainty, i.e., as a definite change in the knowledge system of the receptor of information. In the semantic information theory of Bar-Hillel and Carnap, information causes change in knowledge about the real state of a system under consideration. In algorithmic information theory, information about a constructive object, e.g., a string of symbols, is characterized by the construction of this object, while information in one object about another one reflects changes in the systems of construction algorithms.
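For the Shannon case, the "elimination of uncertainty" reading can be made concrete in a few lines (an illustrative sketch, not part of the GTI formalism):

```python
import math

# Information received = reduction of entropy (uncertainty) in the
# receptor's knowledge system.
def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

prior = [0.25, 0.25, 0.25, 0.25]   # four equally likely states
posterior = [1.0, 0.0, 0.0, 0.0]   # a message singles out one state

print(entropy(prior) - entropy(posterior))   # 2.0 bits received
```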

Taking a physical system D as the infological system and allowing only physical changes, we see that information with respect to D coincides with (physical) energy.

Taking a mental system B as the infological system and considering only mental changes, information with respect to B coincides with mental energy.

Taking a cognitive system C as the infological system and considering only structural changes, information with respect to C coincides with information per se.

As a model example of an infological system IF(R) of an intelligent system R, we take the system of knowledge of R. In cybernetics, it is called the thesaurus Th(R) of the system R. Another example of an infological system is the memory of a computer. Such a memory is a place in which data and programs are stored and is a complex system of diverse components and processes.

The concept of an infological system shows that not only living beings receive and process information. For instance, it is natural to treat the memory of a computer as an infological system. Then what changes this memory is information for the computer.

Ontological Principle O2a (the Special Transformation Principle). Information in the strict sense, or proper information, or, simply, information for a system R, is a capacity to change structural infological elements from an infological system IF(R) of the system R.

There is no exact definition of infological elements, although there are various entities that are naturally considered as infological elements because they allow one to build theories of information that inherit conventional meanings of the word information. For instance, knowledge, data, images, algorithms, procedures, scenarios, ideas, values, goals, ideals, fantasies, abstractions, beliefs, and similar objects are standard examples of infological elements. Note that all these elements are structures and not physical things. That is why we use structural infological elements per se for identifying information in the strict sense.

This allows us to give an aesthetically eye-catching description of information:

Information is energy in the Platonic World of Ideas

Ontological Principle O2c (the Cognitive Transformation Principle). Cognitive information for a system R is a capacity to cause changes in the cognitive infological system CIF(R) of the system R.

An infological system IF(R) of the system R is called cognitive if IF(R) contains (stores) elements or constituents of cognition, such as knowledge, data, ideas, fantasies, abstractions, beliefs, etc. A cognitive infological system of a system R is denoted by CIF(R) and is related to cognitive information.

Having outlined (defined) the concept of information, let us consider how information exists in the physical world.

Ontological Principle O3 (the Embodiment Principle). For any portion of information I, there is always a carrier C of this portion of information for a system R.

The substance C that is a carrier of the portion of information I is called the physical, or material, carrier of I.

Ontological Principle O4 (the Representability Principle). For any portion of information I, there is always a representation C of this portion of information for a system R. 

Ontological Principle O5 (the Interaction Principle). A transaction/transition/transmission of information goes on only in some interaction of C with R.

Ontological Principle O6 (the Actuality Principle).  A system R accepts a portion of information I only if the transaction/transition/transmission causes corresponding transformations in R.

Ontological Principle O7 (the Multiplicity Principle). One and the same carrier C can contain different portions of information for one and the same system R.

Now we give a list of axiological principles.

Axiological Principle A1. A measure of information I for a system R is some measure of changes caused by I in R (for information in the strict sense, in IF(R)).

Note that it is possible to take the quantity of resources used for inflicting the changes caused by information I in a system R as a measure of these changes and, consequently, as a measure of the information I.
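A toy reading of this principle (our illustration, not Burgin's own code or formalism): if the infological system is a simple knowledge base, the measure of a received message can be taken as the number of entries it changes:

```python
# Measure a portion of information by the changes it causes in the
# infological system, here modeled as a plain dictionary.
knowledge = {"sky": "blue", "water": "wet"}

def receive(kb, message):
    """Apply a message and return the number of changed entries."""
    changes = sum(1 for k, v in message.items() if kb.get(k) != v)
    kb.update(message)
    return changes

info_measure = receive(knowledge, {"sky": "grey", "grass": "green"})
print(info_measure)   # 2 -- the message changed two entries
```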

Axiological Principle A2. One carrier C can contain different portions of information for a given system R.

Axiological Principle A3. According to time orientation, there are three types of measures of information: 1) potential or perspective; 2) existential or synchronic; 3) actual or retrospective.

Axiological Principle A4. According to the scale of measurement, there are two groups, each of which contains three types of measures of information: (1) qualitative measures, which are divided into descriptive, operational and representational measures, and (2) quantitative measures, which are divided into numerical, comparative and splitting measures.

Axiological Principle A5. According to spatial orientation, there are three types of measures of information: external, intermediate, and internal.

Axiological Principle A6. Information I, which is transmitted from a carrier C to a system R, depends on interaction between C and R.

Axiological Principle A7. Measure of information transmission from a carrier C to a system R reflects a relation (like ratio, difference etc.) between measures of information that is admitted by the system R in the process of transmission and information that is presented by C in the same process.

 

  3. The general theory of information as a unifying factor for information studies

First, the general theory of information gives a flexible, efficient and all-encompassing definition of information. In contrast to other definitions and descriptions used before, this definition is parametric allowing specification of information in general, as well as information in any domain of nature, society and technology.

Even more, the new definition, taken in a broad context, makes it possible to unite the conceptions of information, physical energy and psychic energy in one comprehensive concept. Being extremely wide-ranging, this definition supplies meaning and explanation to von Weizsäcker's conjecture that energy might in the end turn out to be information, as well as to Wheeler's aphorism "It from Bit".

This shows that the general theory of information provides means for a synthesis of physics, psychology and information science playing the role of a metatheory for these scientific areas.

At the same time, the new definition characterizes proper information when the general concept is specified by additional principles. The construction of an infological system allows researchers to exactly delineate information in the area of their studies.

Second, the general theory of information explains and makes available constructive tools for discerning information, measures of information, information representations and carriers of information. For instance, taking a letter written on a piece of paper, we see that the paper is the carrier of the information, the text is its representation, and it is possible to measure the quantity of this information using Shannon entropy or algorithmic complexity.
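Both measures mentioned here are easy to demonstrate (a sketch with an invented letter text; compressed size stands in for algorithmic complexity, which is uncomputable in general and only approximated by real compressors):

```python
import math, zlib
from collections import Counter

# A placeholder "letter"; repetition makes the redundancy visible.
letter = "Dear colleague, the meeting is moved to Friday at noon. " * 4

counts = Counter(letter)
n = len(letter)
shannon = -sum(c / n * math.log2(c / n) for c in counts.values())
print(f"{shannon:.2f} bits/character (Shannon estimate)")

compressed = len(zlib.compress(letter.encode()))
print(f"{compressed * 8} bits after compression (complexity proxy)")
```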

Third, the general theory of information provides efficient mathematical models. There are models of three types: information algebras, operator models based on functional analysis and operator models based on category theory. Functional representations of information dynamics preserve internal structures of information spaces associated with infological systems as their state or phase spaces. Categorical representations of information dynamics display external structures of information spaces associated with infological systems. Algebraic representations of information dynamics maintain intermediate structures of information spaces. These models allow researchers to discover intrinsic properties of information.

Fourth, the general theory of information supplies methodological and theoretical tools for the development of measurement and evaluation technologies in information studies and information technology. Moreover, any science needs theoretical and practical means for making grounded observations and measurements. Different researchers in information theory have developed many methods and measures. The most popular of them are Shannon’s entropy and algorithmic complexity. The general theory of information unifies all these approaches opening new possibilities for building efficient methods and measures in areas where the currently used methods and measures are not applicable.

Fifth, the general theory of information offers organization and structuration of the system of all existing information theories.

However, it is important to understand that this unifying feature and all advantages of the general theory of information do not exclude necessity in special theories of information, which being more specific, can go deeper in their investigation of properties of information and information processes in various areas. For instance, syntactic information theories, such as Shannon’s theory, are very useful in the area of communication. Algorithmic information theories, such as the theory of Kolmogorov complexity, are very useful in the area of automata, computation and algorithms. There are also semantic, pragmatic, economic, semiotic and other special information theories, each of which is directed at investigation of specific properties of information, information processes and systems.

Sixth, the general theory of information explicates the relevant relations between information, knowledge and data demonstrating that while knowledge and data are objects of the same type with knowledge being more advanced than data, information has a different type. These relations are expressed by the Knowledge-Information-Matter-Energy Square:

information is related to knowledge (data) as energy is related to matter

In particular, it is possible to transform knowledge or data into information as we can transform matter into energy.

Seventh, the general theory of information rigorously represents static, dynamic and functional aspects and features of information. These features are modeled and explored by algebraic, topological and analytical structures of operators in functional spaces and functors in the categorical setting forming information algebras, calculi and topological spaces.

Eighth, the general theory of information explicates and elucidates the role of information in nature, cognition, society and technology clarifying important ontological, epistemological and sociological issues. For instance, this theory explains why popular but not exact and sometimes incorrect publications contain more information for people in general than advanced scientific works with outstanding results.

 

Selected bibliography

  1. Burgin, M. (2010) Theory of Information: Fundamentality, Diversity and Unification, World Scientific, New York/London/Singapore
  2. Burgin, M. (1997) Information Algebras, Control Systems and Machines, No. 6, pp. 5-16 (in Russian)
  3. Burgin, M. (2003) Information Theory: A Multifaceted Model of Information, Entropy, v. 5, No. 2, pp. 146-160
  4. Burgin, M. (2004) Data, Information, and Knowledge, Information, v. 7, No. 1, pp. 47-57
  5. Burgin, M. (2010) Information Operators in Categorical Information Spaces, Information, v. 1, No. 1, pp. 119-152
  6. Burgin, M. (2011) Information in the Structure of the World, Information: Theories & Applications, v. 18, No. 1, pp. 16-32
  7. Burgin, M. (2011a) Information: Concept Clarification and Theoretical Representation, TripleC, v. 9, No. 2, pp. 347-357 (http://triplec.uti.at)
  8. Burgin, M. (2011b) Epistemic Information in Stratified M-Spaces, Information, v. 2, No. 2, pp. 697-726
  9. Burgin, M. (2011c) Information Dynamics in a Categorical Setting, in Information and Computation, World Scientific, New York/London/Singapore, pp. 35-78
  10. Burgin, M. (2013) Evolutionary Information Theory, Information, v. 4, No. 2, pp. 224-268
  11. Burgin, M. (2014) Weighted E-Spaces and Epistemic Information Operators, Information, v. 5, No. 3, pp. 357-388
  12. Chaitin, G.J. (1977) Algorithmic Information Theory, IBM Journal of Research and Development, v. 21, No. 4, pp. 350-359
  13. Fisher, R.A. (1950) Contributions to Mathematical Statistics, Wiley, New York
  14. Frieden, R.B. (1998) Physics from Fisher Information, Cambridge University Press, Cambridge
  15. Shannon, C.E. (1948) The Mathematical Theory of Communication, Bell System Technical Journal, v. 27, No. 1, pp. 379-423; No. 3, pp. 623-656
  16. Smolin, L. (1999) The Life of the Cosmos, Oxford University Press, Oxford/New York
  17. von Baeyer, H.C. (2004) Information: The New Language of Science, Harvard University Press, Cambridge, MA
  18. von Weizsäcker, C.F. (1974) Die Einheit der Natur, Deutscher Taschenbuch Verlag, Munich
  19. Wheeler, J.A. (1990) Information, Physics, Quantum: The Search for Links, in Complexity, Entropy, and the Physics of Information, Addison-Wesley, Redwood City, CA, pp. 3-28

 

 

  • Open access
  • 95 Reads
Information analysis of Foundation of Information Science (FIS) information exchange

INTRODUCTION

Information Science (IS) is a relatively new science that emerged after the Second World War, influenced by Bush's (1945) ideas, from the perspective of managing scientific information (Wersig & Neveling, 1975) or, according to some theorists, by Otlet's (1934) thinking about documents and documentation. The first formulation of Information Science's modern concept occurred during two meetings at the Georgia Institute of Technology (1962). Following these two meetings, its interdisciplinary character started to be recognized, though not explicitly, by most authors. In Brazil, in particular, this issue has been debated epistemologically (Pinheiro, 2009). Although documenting and retrieving information was its initial motivation, IS has grown and now studies information in categorized contexts, for example in the US by Asis&t's Special Interest Groups (SIGs) and in Brazil by Ancib's Working Groups. According to the classification of the areas of knowledge of the National Council for Scientific and Technological Development (CNPq), Information Science is an Applied Social Science. The historiography of the area has elicited perspectives and approaches that today place IS in a new sociological and humanistic approach, in which pragmatism and fields such as Philology and Philosophy play a relevant role in the epistemological re-discussion of IS.

The Foundations of Information Science (FIS - http://fis.sciforum.net/), an informal endeavor promoted by Michael Conrad and Pedro Marijuán, has been an attempt to "rescue the information concept out from its classical controversies and use it as a central scientific tool". From the FIS point of view, Information Science is a more comprehensive domain: one of the four main pillars of science, together with the Physical, Biological and Social sciences. This long-term project, which began in 1992, discusses its ideas on a permanent electronic mailing list, established in 1997, and at periodic conferences (Madrid 1994, Vienna 1996, Paris 2005, Beijing 2010, Moscow 2013, Vienna 2015 and Gothenburg 2017). The board of the FIS initiative is composed of a multidisciplinary group of 18 members (http://fis.sciforum.net/fis-board/), and the FIS mailing list had 351 members as of April 10, 2017. Yan Xueshan analyzed the content of the FIS messages from 1997 to 2007, extracting and discussing the approaches of list members to the topics "Information Concepts", "Physical Information", "Bioinformatics", "Information Society", "Other Information" and "Information Science" (Yan, 2016, pp. 587-638).

The results of this research show that Information Concept and Information in Physics are the two most discussed topics on the FIS list. It is also known that it is not possible to define information uniquely, because it depends on the context (Capurro & Hjørland, 2003; Logan, 2014). The possibility of a consensus could come through a non-reductionist theory that unifies the concepts of information (Capurro, Fleissner & Hofkirchner, 1997). In addition, Bais & Farmer (2007), who describe the central role of information in thermodynamics, statistical mechanics, chaos theory, computer science, quantum theory and astrophysics, have reviewed the concept of information in the context of Physics.

OBJECTIVE

The objective of this research is to perform a quantitative analysis of FIS’s mailing list messages with the purpose of 1) classifying them into topic groups; 2) evaluating their evolution over time; and 3) identifying their main authors. A total of 5,375 messages exchanged between December 1997 and November 2016 were considered.

METHODOLOGY

The FIS list is an electronic forum for email exchange hosted by servers at the University of Zaragoza, Spain, where the list's moderator, Pedro C. Marijuán, works. All messages used as the data source for this analysis are available on three sites: Site 1 (1997-2007): http://fis-mail.sciforum.net; Site 2 (February 2006-present): https://www.mail-archive.com/fis@listas.unizar.es/maillist.html; and Site 3 (April 2014-present): http://listas.unizar.es/pipermail/fis/. Site 1 is a static repository; Sites 2 and 3 are updated daily, outside and inside the university server, respectively. Sites 1 and 2 were downloaded using the HTTrack Website Copier software, and all 5,375 FIS messages, posted between December 6, 1997 and November 29, 2016, were saved on a local computer. Each message is stored individually as an html file, but the message index, containing title, upload date and author, is displayed on one page on Site 1 and on 16 pages on Site 2. Site 3 was not used in this analysis because its messages overlap with those of Site 2.

The methodological procedures of this research were carried out in four steps:

Step 1: Export to Excel - The content of each message index was exported to an Excel spreadsheet which included four columns: message subject, author, date of posting, and html file name.

Step 2: Message Classification - A Discussion Topic was assigned to each message, based on the 50 topics available at http://fis.sciforum.net/fis-discussion-sessions/ (accessed November 29, 2016). The classification was based on content analysis of the first message of each thread and extended to its responses. We identified 19 additional new topics during this analysis, resulting in a total of 69 topics.

Step 3: Grouping topics - Similar topics were grouped together. We added the topic "Administrative" for administrative messages, usually authored by the list moderator; "Announcement" for communications related to conferences and calls for papers; and "Other Topics" for messages that did not fit into any of the Grouped Topics.

Step 4: Compilation of results - We used Excel data analysis tools, mainly Pivot Tables, for extracting and tabulating quantitative data that, together with content analysis, served as the basis for the interpretation of the results; an equivalent script is sketched below.
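For reproducibility, the same tabulation can be sketched outside Excel (the file name is hypothetical; the columns follow the spreadsheet described in Step 1, with grouped_topic added in Steps 2-3):

```python
import pandas as pd

# Expected columns: subject, author, date, file, grouped_topic
df = pd.read_csv("fis_messages.csv", parse_dates=["date"])

# Five-year bins matching the periods in Table 1 (1997-2001, 2002-2006, ...)
df["period"] = (df["date"].dt.year - 1997) // 5

table = pd.pivot_table(df, index="grouped_topic", columns="period",
                       values="subject", aggfunc="count",
                       fill_value=0, margins=True)
print(table.sort_values("All", ascending=False))
```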

RESULTS

The 5,375 messages posted on the FIS mailing list between December 6, 1997 and November 29, 2016 were classified into 32 Grouped Topics, as shown in Table 1.

Table 1: Foundations of Information Science 5,375 messages classified into 32 Grouped Topics from 1997 to 2016.

Rank - Grouped Topic - 1997-2001 - 2002-2006 - 2007-2011 - 2012-2016 - Total
- All Topics (Number, %) - 407 (8%) - 2,096 (39%) - 1,178 (22%) - 1,694 (32%) - 5,375
1 - Information and Physics - 36 - 585 - 98 - 359 - 1,078
2 - Announcement - 119 - 331 - 171 - 203 - 824
3 - Definition of Information - 17 - 190 - 168 - 239 - 614
4 - Social Information - 79 - 57 - 64 - 144 - 344
5 - Biological Information - 52 - 140 - 21 - 31 - 244
6 - Information and Neuroscience - 0 - 11 - 85 - 117 - 213
7 - Information and Meaning - 3 - 112 - 85 - 8 - 208
8 - Administrative - 22 - 79 - 62 - 22 - 185
9 - Science - 7 - 20 - 68 - 74 - 169
10 - Phenomenology - 0 - 0 - 0 - 167 - 167
11 - Information and Economics - 0 - 95 - 24 - 12 - 131
12 - Information Theory - 0 - 5 - 114 - 8 - 127
13 - Information and Philosophy - 12 - 44 - 0 - 64 - 120
14 - Information and Logic - 0 - 0 - 102 - 9 - 111
15 - Semiotics - 0 - 19 - 0 - 75 - 94
16 - Consilience - 0 - 91 - 0 - 0 - 91
17 - Information and Chemistry - 0 - 43 - 43 - 0 - 86
18 - Information Science - 6 - 0 - 39 - 30 - 75
19 - Information and Ethics - 0 - 66 - 0 - 0 - 66
20 - Bibliometry - 0 - 58 - 0 - 0 - 58
21 - Information and Knowledge - 0 - 18 - 34 - 0 - 52
22 - Ecological Economics and Information - 0 - 45 - 0 - 0 - 45
23 - Scientific Communication - 0 - 0 - 0 - 44 - 44
24 - Information and Mathematics - 0 - 0 - 0 - 36 - 36
25 - Information and Music - 0 - 35 - 0 - 0 - 35
26 - Information and Natural Languages - 33 - 0 - 0 - 0 - 33
27 - Information and Art - 0 - 26 - 0 - 0 - 26
28 - Information, Communication and Life - 0 - 0 - 0 - 26 - 26
29 - Other topics - 9 - 11 - 0 - 4 - 24
30 - Consciousness - 12 - 0 - 0 - 11 - 23
31 - Information and Symmetry - 0 - 15 - 0 - 0 - 15
32 - Information and Computing - 0 - 0 - 0 - 11 - 11

The most discussed Grouped Topic was "Information and Physics", which included the following topics: Information & Physics (1998), Information Physics (2002), Entropy and Information: Two Polymorphic Concepts (2004), Quantum Information (2006), The Nature of Microphysical Information: Revisiting the Fluctuon Model (2010) and Quantum Bayesianism (QBism) - An Interpretation of Quantum Mechanics Based on Quantum Information Theory (2014). The recurrence of the subject over the years and the number of messages (1,078) indicate that information in the context of Physics is important to FIS list members.

The question "What is information?" appears on the FIS homepage (http://fis.sciforum.net/), so it was not surprising that "Definition of Information" occupied an important position in the ranking, here found to be in third place. There were three long discussions in 1999, 2015 and 2016, representing approximately 11% of all 5,375 messages. This topic also permeates the messages of other topics, since the concept of information is usually defined and/or questioned before the discussions. The definitions themselves and epistemological questions are discussed in the messages and one of the consensuses is that the concept of information is context dependent.

Surprisingly, "Information Science" was ranked in 18th place, an apparent contradiction to the list's name and purpose.

The other lower ranking grouped topics were, most of the time, chosen according to the specialties of the leaders of the discussions.

Table 2 shows the 10 most productive authors on the FIS mailing list, the number of their documents indexed in the Scopus database, and their respective areas of interest retrieved from official sites and authors' CVs.

Table 2: List of the 10 most productive authors in the Foundation of Information Science (FIS) list, number of messages posted on FIS list and documents indexed by Scopus

Author - Number of messages on FIS list - Number of documents indexed by Scopus - Areas of interest
Pedro C. Marijuan - 871 - 34 - Information Sciences, Biology, Neuroscience
Loet Leydesdorff - 394 - 344 - Physics, Biology, Philosophy, Bibliometrics
Stanley N Salthe - 339 - 57 - Biology, Philosophy, Physics
John Collier - 220 - 27 - Philosophy, Biology, Information Theory, Systems Theory
Joseph Brenner - 202 - 20 - Theory and Philosophy of Information, Logic, Physics
Jerry LR Chandler - 178 - 27 - Chemistry, Biochemistry, Genetics, Complex Systems, Physics, Medicine
Karl Javorszky - 176 - 1 - Philosophy, Epistemology, Psychology
Rafael Capurro - 156 - 19 - Philosophy, Ethics, Information in social contexts
Søren Brier - 125 - 29 - Philosophy of science, Cybersemiotics, Biology
Steven Ericsson-Zenith - 118 - 0 - Biophysics, Computation, Bioengineering, Theory of Mind, Cosmology, Logic, Semiotics
Totals - 2,779 - 558 -
 

The areas of interest comprise a multidisciplinary network that involves the discussions of the list in diverse contexts and points of view. In fact, the top 10 authors participated, on average, in 72% (23 of 32) of the Grouped Topics.

The number of co-authorships among the 36 main authors (not all listed in Table 2) is small. In fact, of 2,165 documents indexed in the Scopus database for these authors, only eight were produced together. Therefore, belonging to the FIS list does not seem to promote collaboration among its members.

Diversity of areas of interest and low number of co-authorships suggest that the cross disciplinary collaboration of FIS list takes place at the level of multidisciplinarity, the first of the three levels defined by Pombo (2004). This can be evaluated in future work that analyzes the relationship between threading and interdisciplinarity (Zelman & Leydesdorff, 2000). Unfortunately, since it is customary for FIS members to change the message subject when replying to a message, it will be challenging to count specific threads, which is essential for this type of analysis. One solution would be to suggest to the group of list participants that they preserve this "metadata" (i.e., message subject) to facilitate future research. In this sense, analyses of co-authorship and co-citation among group members, thus grouped, could reveal signs of interdisciplinarity.

We hope, with this communication, to pave the way for a deeper and more systematic study of the contents of the FIS-list messages, in order to index them so that their discussions serve as a basis for future research.

 

REFERENCES

Bais, F. A., & Farmer, J. D. (2007). The Physics of Information. arXiv preprint:0708.2837v2. Retrieved from https://arxiv.org/abs/0708.2837v2

Bush, V. (1945). As We May Think. The Atlantic, 176(July), 101–108.

Capurro, R., Fleissner, P., & Hofkirchner, W. (1997). Is a unified theory of information feasible? A trialogue. World Futures: Journal of General Evolution, 49(3–4), 213–234.

Capurro, R., & Hjørland, B. (2003). The concept of information. Annual Review of Information Science and Technology. Retrieved from https://www.scopus.com/inward/record.uri?eid=2-s2.0-0037246923&partnerID=40&md5=778eebb65c2d237722b09d1d865cbf12

Georgia Institute of Technology. (1962). Training Science Information Specialists. In National Science Foundation (Ed.), Proceedings of the Conferences (p. 139). Atlanta: Georgia Institute of Technology.

Logan, R. K. (2014). What is Information?: Propagating Organization in the Biosphere, Symbolosphere, Technosphere and Econosphere. (P. J. and G. Van Alstyne, Ed.). Toronto: DEMO Publishing. Retrieved from http://demopublishing.com/book/what-is-information/

Otlet, P. (1934). Traité de documentation : le livre sur le livre, théorie et pratique. Bruxelles: Eds Mundaneum.

Pinheiro, L. V. R. (2009). Configurações disciplinares e interdisciplinares da Ciência da Informação no ensino e pesquisa no Brasil. In Encontro de la Associación de Educación e Investigaciones en Ciencia de la Información de Iberoamérica y el Caribe (EDIBCIC) (pp. 99–111). Coimbra: Imprensa da Universidade de Coimbra.

Pombo, O. (2004). Interdisciplinaridade: Ambições e Limites. Lisboa: Relógio D’Água Editores.

Wersig, G., & Neveling, U. (1975). The phenomena of interest to information science. Information Scientist, 9(4), 127–140. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.5319&rep=rep1&type=pdf

Yan, X. (2016). Information Science: Concept, System and Perspective. Beijing: Science Press.

Zelman, A., & Leydesdorff, L. (2000). Threaded email messages in self-organization and science & technology studies oriented mailing lists. Scientometrics, 48(3), 361–380. Retrieved from https://www.scopus.com/inward/record.uri?eid=2-s2.0-0040952900&partnerID=40&md5=479316dac39ef15e63a0e3433c08348c

  • Open access
  • 43 Reads
Physical information systems

Information is usually studied in terms of strings of characters, as in the methods described by Shannon (1948), which apply best to the information capacity of a channel, and by Kolmogorov and Chaitin, which apply best to the information in a system. Variations on the Kolmogorov approach are Minimum Description Length, introduced by Jorma Rissanen (1978), and Minimum Message Length, a Bayesian measure developed by C.S. Wallace and his associates (Wallace & Boulton 1968; Wallace & Dowe 1999). These work most easily for strings but can be adapted to statistical distributions. The basic idea of both, though they differ somewhat in method, is that the most compressed form of the information in something is a measure of its actual information content, an idea I will clarify shortly. This compressed form gives the minimum number of yes/no questions required to uniquely identify some system. I will presuppose this is reasonable.
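The yes/no-question reading can be illustrated with a toy of our own (not Collier's formalism): identifying one of n equally likely systems by repeated halving takes about log2(n) binary questions, which is exactly the item's information content in bits.

```python
import math

def questions_needed(items, target):
    """Count halving questions ('is it in the left half?') to find target."""
    count = 0
    while len(items) > 1:
        mid = len(items) // 2
        items = items[:mid] if target in items[:mid] else items[mid:]
        count += 1
    return count

systems = [f"system_{i}" for i in range(16)]
print(questions_needed(systems, "system_11"))   # 4
print(math.ceil(math.log2(len(systems))))       # 4 = log2(16)
```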

Collier (1986) introduced the idea of a physical information system. The idea was to open up information from strings to multidimensional physical systems. Although any system can be mapped onto a (perhaps continuous) string, it struck me that it was more straightforward to go directly to multidimensional systems, as found in, say, biology. Even DNA has a tertiary structure that can be functionally important, and thus contains information, though I did not address this issue in my article. A later article (2003) showed the hierarchical nature of biological information and was more specific about the cohesion concept I used in 1986 to refer to dynamical unity.

The basic idea is to use statistical mechanics to describe an informational entropy that can self-organize in a way similar to chemical systems, and to apply it to speciation, development, and other biological processes. Although notions such as temperature and free energy have no obvious correlates in evolutionary biology, it is possible to use the organizational and probabilistic aspects of entropy and information to introduce a notion of biological entropy. This is not necessarily the same as thermodynamic entropy, and the exact relation is open to question. What we do need, though, is that the underlying microstates are not correlated relative to the information at higher levels except for their general properties, so they act effectively randomly with respect to those levels. Traditionally, for example, variations in DNA have been thought to be random. Biological systems depend on available free energy, but it is their organization, passed on as information in hereditary processes, that is most important. So they are primarily information systems.

Biological information is subject to two sorts of variation producing new information. First, potential information (information at the lower level) can be converted into stored information (expressed at a higher level), creating new expressions in the individual or new stable structures in a species, perhaps yielding a new species. The second sort of new information is produced by alterations to the genetic structure, resulting in new information of one of the four possible types. Both sorts add new possibilities: in the first case for development and environmental interaction, and in the second case, since it may involve the creation of potential information, for future expression as well. The increase in possibilities is generally faster than they are filled up, producing an information negentropy, or order, at the higher level. This permits order and complexity to increase together. Thus, there are two entropies important in biological systems: the entropy of information and the entropy of cohesion. The information of a system in a particular state is the difference between the maximum possible entropy of the state and its actual entropy. The entropy of information is the amount of information required, given the information of the state, to determine its microstructure. In other words, the entropy of information represents the residual uncertainty about the physical system after the ordering effect of the information contained in the biological system is subtracted.
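As a toy numerical sketch of this difference, in the spirit of Gatlin's (1972) stored information (the sequence below is invented for illustration):

```python
import math
from collections import Counter

# Stored information = maximum possible entropy minus observed entropy.
seq = "AATAAGCTAATAATGGCTAA"      # invented example sequence

n = len(seq)
h_max = math.log2(4)              # 2 bits per nucleotide (4 symbols)
h_obs = -sum(c / n * math.log2(c / n)
             for c in Counter(seq).values())

stored = h_max - h_obs            # information (negentropy)
redundancy = stored / h_max
print(f"H_max={h_max:.2f}, H_obs={h_obs:.2f}, "
      f"stored={stored:.2f} bits/symbol, R={redundancy:.2f}")
```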

A physical information system is a system containing stored information whose properties depend only on properties internal to the system. Stored information is like Shannon-Weaver information, except that, like bound information, it is physically real. It exists whenever there are relatively stable structures which can combine lawfully. These structures are the elements of the information system. The stored information of an element cannot be greater than its bound information (or else either lawfulness or the second law of thermodynamics would be violated), but the actual value is determined by its likelihood of combination with the other elements. The information content of a physical combination of elements (an "array") is the sum of the contributions of the individual elements. For example, the nucleic acids have a structure which contains a certain amount of bound information (they are not just random collections of atoms), and can interact in regular ways with other nucleic acids (as a consequence, but not the only one, of their physical structure). The stored information of a given nucleic acid sequence is determined by the a priori probability of that sequence relative to all the permitted nucleic acid sequences with the same molecules. The bound information, which will be greater, is determined by the probability of the sequence relative to all the random collections of the same molecules. (Nucleic acids, of course, have regular interactions with other structures, as well as with themselves in three dimensions, so the restriction of the information system to just nucleic acid sequences is questionable. We can justify singling out these sequences because of their special role in ontogeny and reproduction.) The lawful (regular) interactions of elements of an information system determine a set of (probabilistic) laws of combination, which we can call the constraints of the information system (see Shannon and Weaver 1949: 38 for a simple example of constraints). Irregular interactions, either among elements of the information system or with external structures, represent noise to the information system.

The elements of an information system, since they are relatively stable, have fixed bound information. It is therefore possible to ignore their bound information in considering entropy variations. The elements are the "atoms" of the system, while the arrays are the states. The stored information of an array is a measure of its unlikelihood given the information system. The entropy (sensu Brillouin 1962) of this unlikelihood equals the entropy of the physical structure of the array minus the entropy of the information system constraints. This value is negative, indicating that the stored information of an array is negentropic. Its absolute value is the product of the redundancy of the information system and the Shannon entropy. This is just Gatlin's (1972) stored information. Array entropy so calculated reflects more realistically what can be done with an information system than the Shannon-Weaver entropy. In particular, random alterations to an array make it difficult to recover the array.

This definition of array entropy is inadequate, since it is defined in terms of properties not in the system, namely the entropies of the constraints and of the structure constituting the array. The entropy of a system is usually defined in terms of the likelihood of a given macrostate. Two microstates are macrostate-equivalent if they have the same effect at the macro level (ignoring statistical irregularities). If we assume that all states must be defined internally to the system, the above analysis of arrays does not allow any non-trivial macrostates; each macrostate has just one microstate. This forces a definition of entropy in terms of elements not in the system, or else a "cooked" definition, like the Shannon-Weaver entropy. A satisfactory definition of array entropy must be given entirely in terms of the defining physical properties of the information system elements. Such a definition can be given by distinguishing between actual and possible array states.

By assumption, the elements of the system are relatively stable and combine lawfully to form arrays. Possible maximal arrays of elements are the microstates. The macrostates are the actual array states. The microstates of an array are the possible maximal arrays of which it is a part. The information and entropy of a macrostate are defined in the usual way in terms of probabilities of microstates. In abstract information systems this definition degenerates, since arrays can be arbitrarily large. In realistic information systems, though, there is an upper limit on possible array size (though it might be somewhat vague). In organisms the maximum array size is restricted largely by the lengths of the chromosomes. In species it is restricted to the maximum number of characteristics of a member. (There must be such a maximum, since the amount of genetic information is finite.) The array information is a form of bound information, but also has an entropy defined only in terms of the information system characteristics. The external entropy of the null array is the entropy of the constraints on the information system. The external entropy of a maximal array is the base line from which the internal entropy can be measured. It can be called the entropy of the information system. The size of the information system is the difference between these two entropies:

[1]    Size = H(constraints) - H(system).

The external entropy of an array is the internal entropy plus the entropy of the information system, equal to the entropy of the constraints minus the array information:

[2]    H(external) = H(internal) + H(system) = H(constraints) - I.

The internal entropy of information systems is an extension of the classical statistical entropy of thermodynamic systems. It treats information systems as closed with respect to information but open to matter and energy, whereas mechanical systems are closed if they allow energy to flow in and out of the system, but not matter. The internal entropy of an array is determined by the physically possible ways it could be realized, just as the entropy of a thermodynamic state is determined by its possible microstates. The internal entropy is no less physical than the thermodynamic entropy, unlike the sequence or configurational entropy of Shannon-Weaver information. Array information is a special case of message information, just as bound information is a special case of free information. In this sense it is not anthropomorphic to speak of a biological code or a chemical message.

Codes can be hierarchical. Units concatenated out of elements of a lower level can form natural elements of a higher level. An example is the hierarchy of characters, words and sentences. Sequences of characters terminating with a special character, like a space, comma or period, form possible words. Sequences of words terminated by a period or other sentence terminator form possible sentences. Not all possible words are words, nor are all possible sentences sentences. Otherwise the hierarchy would be trivial. Words are distinguished from non-words by having a meaning or grammatical function, and sentences are distinguished from non-sentences by being grammatical. Because these properties of words and sentences are useful, words and sentences tend to outnumber other character strings. Some non-words and non-sentences are present in the language, however, which are potentially words or sentences, since they would be so if they fell into common use.

Brillouin (1962: 55) points out that a more efficient code for English would exploit the fact that not all potential words are actual words, by encoding words so as to permit fewer non-words. The information required per character could be reduced by a factor of more than two, yet the same amount of information could be conveyed by the same number of characters. An even larger reduction could be achieved by eliminating potential sentences, and even more, no doubt, by eliminating unverifiable sentences. This would not only make language learning difficult, but would also reduce the likelihood of change in the language.
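A rough calculation reproduces the order of magnitude of this saving (the vocabulary size and average word length below are assumed illustrative values, not Brillouin's figures):

```python
import math

alphabet = 27                 # 26 letters plus space
vocab = 50_000                # assumed number of actual English words
avg_word_chars = 6.5          # assumed average word length incl. space

bits_per_char_naive = math.log2(alphabet)                   # ~4.75
bits_per_word_naive = bits_per_char_naive * avg_word_chars  # ~30.9
bits_per_word_coded = math.log2(vocab)                      # ~15.6

print(f"naive:  {bits_per_word_naive:.1f} bits/word")
print(f"coded:  {bits_per_word_coded:.1f} bits/word")
print(f"saving factor: {bits_per_word_naive / bits_per_word_coded:.1f}")
```

With these assumed values the saving comes out at about a factor of two, in line with Brillouin's estimate.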

Using the Brooks-Wiley (1986) terminology, the distinguished set of higher-level messages contains the stored information of the information system, while the variants contain the potential information. The stored information is what distinguishes a system from other systems. In physical information systems the basis of the individuation must be some physical property.

I will finish by discussing the levels relevant to biological information systems, and how the possibility of self-organization in this system is relevant to evolution.

 

References

Brillouin, L. 1962. Science and Information Theory, Academic Press, New York.

Brooks, D.R. and E.O. Wiley. 1986. Evolution as Entropy: Toward a Unified Theory of Biology. University of Chicago Press, Chicago.

Collier, John. 1986. Entropy in Evolution. Biology and Philosophy. 1: 5-24.

Collier, John. 2003. Hierarchical dynamical information systems with a focus on biology. Entropy  5: 100-124.

Gatlin, L.L. 1972. Information Theory and the Living System. Columbia University Press, New York.

Rissanen, Jorma. 1978. Modeling by Shortest Data Description. Automatica, Vol. 14: 465-471.

Shannon, C.E. 1948. A Mathematical Theory of Communication. Bell System Technical Journal, Vol. 27: 379-423, 623-656.

Shannon, C.E. and W. Weaver. 1949. The Mathematical Theory of Communication. University of Illinois Press, Urbana.

Wallace, C.S. and D.M. Boulton. 1968. An information measure for classification. Computer Journal, Vol. 11, No. 2: 185-194.

Wallace, C.S. and D.L. Dowe. 1999. Minimum Message Length and Kolmogorov complexity. Computer Journal, Vol. 42, No. 4: 270-283.

 

  • Open access
  • 58 Reads
Information and Intelligence in the Living Cell: A Fundamental Hiatus for Information Science?

The new panorama that computers and the new technologies have opened on the molecular processes of life, from bioinformatics to “omic” disciplines, and from systems biology to signaling science (to name but a few of the new bioinformational fields), has not yet cohered into a consistent informational scheme or new theory of the cell, so that further high-level characteristics such as meaning, fitness, complexity, and intelligence –closely related to the adaptive relationship with the environment– cannot be consistently approached. Rather, dozens of scarcely interconnected specialized disciplines deal with multiple partial aspects. Consequently, explaining the emergence of astonishing integrative inventions related to multicellularity, e.g., the origins of nervous systems and the further development of neuronal complexity, is left in the shadow.

An essential problem at the very root of today’s amazing accumulation of biomolecular data revolves around the absence of adequate conceptualizations of the LIFE CYCLE as the generalized source and sink of the information flows exchanged by the living system. Herein, leaving aside the specific matters related to the inner cellular informational dynamics, we will focus on the relationships between the advancing life cycle of the cell and the information flows of the environment.

What are the general conditions for advancing a life cycle in the simplest cell? As an open self-producing system, a great variety of inputs and outputs are necessary (even for the simplest prokaryotes), consisting not only of matter and energy flows but also involving information flows. Our analysis will involve two basic aspects. On the one side, the structure of the prokaryotic signaling system itself, with all its variety of environmental signals and component pathways (what have been called the 1-2-3 Component Systems), including the role of the few second messengers that have been identified in bacteria too. On the other side, the gene transcription system as depending not only on signaling inputs but also on a diversity of inner factors: metabolic products, sigma factors, house-keeping systems, channels and transporters, etc. Actually, in spite of this remarkable degree of complexity, the gene expression system of bacteria is highly systematic in its hierarchical organization and has been compared to computer operating systems. So there is room to explore the organized and systematic convergence of stimuli from different signaling paths “encoding” integrated aspects of the environmental information flows. The specific life cycle of the bacterium will be the essential factor motivating the classes of convergence to be found.

In particular, if we want to ascertain the effect that a given signal or specific information flow produces, we must count the new molecular presences and absences derived from the gene expression consequences of the signaling event. The meaning of a particular signal is thus established through “molecular mining”. But there is no fixed reference there: the life cycle itself, in all its enormous multiplicity of possible ‘moods’ and trajectories, can only be established in retrospect, by ‘freezing’ it. Whenever we look back, we see that the reference that provides, generates, and fabricates the meaning has changed... The whole life cycle is but a temporal sequence of instantaneous meanings continuously churned out by the entire self-production processes and apparatuses of life. Looked at through semiotic conventions, signals appear as compositional structures of the objects themselves, quite indistinguishable and inseparable from them and from the outer world as well; only with the advent of quorum signals and inter-species communication may we partially distinguish signals that denote the presence of a very important ‘animate’ object. As for the subjects, they appear themselves as life cycles in progress, and only that which pertains to the advancement of the life cycle has been evolutionarily incorporated into the subjects’ own communication and energy flows.

We have briefly examined the connection of the life cycle with meaning, and similarly we could advance its interconnection with intelligence, complexity, fitness... Some of this work has already been started by the present author (Marijuán et al., 2011, 2013, 2015), but there is a long way ahead. In the opinion of this author, the lack of adequate connection with the life cycle represents a fundamental hiatus in our conceptions of biological information and its correlate of biological or natural intelligence, and all the other associated concepts. Seemingly, without a meaningful interconnection with the life cycle, the further relationships with the information approaches of the physical and computational realms, as well as with the miscellany of humanities fields, cannot be worked out properly.
