
*Published: 17 July 2017*

**Abstract:**

In the present work, an analytical model of the elementary vortex-motion state of the dry atmosphere with nonzero air-velocity divergence is constructed. It is shown that an air parcel moves along an open trajectory of spiral geometry. It is found that, for nonzero velocity divergence, the atmospheric elementary state presents an unlimited sequence of vortex cells transiting from one to another. At zero divergence, by contrast, the elementary state presents a pair of connected vortices and the trajectory is a closed curve. If the air parcel moves upward in some cell, then it moves downward in the adjacent cell, and vice versa. On reaching the middle height of the cell, the parcel reverses its direction of rotation. When the parcel moves upward, the motion is anticyclonic in the lower part of the vortex cell and cyclonic in the upper part. When the parcel moves downward, the motion is anticyclonic in the upper part of the vortex cell and cyclonic in the lower part.

*Published: 9 June 2017*

**Abstract:**

Dr. C.N.J. de Vey Mestdagh, University of Groningen, the Netherlands, c.n.j.de.vey.mestdagh@rug.nl

Extended abstract, submission date 15-4-2017

*The complexity of the universe can only be defined in terms of the complexity of the perceptual apparatus. The simpler the perceptual apparatus, the simpler the universe. The most complex perceptual apparatus must conclude that it is alone in its universe.*

**Abstract**

The concept of complexity has been neglected in the legal domain, both as a qualitative concept that could be used to legally and politically analyse and criticize legal proceedings, and as a quantitative concept that could be used to compare, rank, plan and optimize these proceedings. In science the opposite is true: especially in the field of Algorithmic Information Theory (AIT), the concept of complexity has been scrutinized.

In this paper we first take a quick look at AIT to see what it could mean in this phase of our research in the legal domain. We conclude that there is a difference between problem complexity and solution complexity. In this paper we therefore start to develop a model of complexity by describing problem complexity in the legal domain. We use a formal model of legal knowledge to derive and describe the parameters for the description of the problem complexity of cases represented in this formal model. Further research will focus on refining and extending the formalization of the model of complexity, on the comparison of problem and solution complexity for several legal cases using available algorithms, and on the validation of the combined model against concrete cases and lawyers’ and legal organizations’ opinions about their complexity.

1. Complexity in the legal domain

The concept of complexity is hardly developed in the legal domain. Most descriptions of concepts related to complexity in legal literature refer to vagueness (intension of concepts), open texture (extension of concepts), sophistication (number of elements and relations) and multiplicity of norms (concurring opinions) - in most cases even without explicit reference to the concept of complexity. Complexity arises in all these cases from the existence and competition of alternative perspectives on legal concepts and legal norms.[1] A complex concept or norm from a scientific point of view is not necessarily a complex concept or norm from a legal point of view. If all parties involved agree, i.e. have or choose the same perspective/opinion, there is no legal complexity, i.e. there is no case or the case is solved. In science more exact definitions of complexity are common and applied: complexity is associated with, i.a., uncertainty, improbability and quantified information content. Despite this discrepancy between the legal domain and the domain of science, complexity is as important in the legal domain as in other knowledge domains. Apart from the obvious human interest of acquiring and propagating knowledge per se, complexity has legal, economic, political and psychological importance. Legal, because a coherent concept of complexity helps to analyse and criticize legal proceedings, in order to clarify them, to enable a justified choice of the level of expertise needed to solve legal cases, and to reduce unnecessary complexity (an example of reducing complexity by compression is given in the next paragraph); economic, because complexity increases costs and measuring complexity is a precondition for the reduction of these costs (it can help in designing effective norms, implementing them effectively, calculating and reducing the costs of legal procedures (cf. White, M.J., 1992), planning the settlement of disputes and other legal proceedings, etc.); political, because legal complexity can be an instrument to exert power and can increase inequality; psychological, because complexity increases uncertainty. A validated model of complexity in the legal domain can help to promote these interests (cf. Schuck, P.H., 1992; Ruhl, J.B., 1996; Kades, E., 1997).

**How to develop a model of complexity in the legal domain (methodology)**

In this paper we will try to bridge the gap between the intuitive definitions of complexity in the legal domain and the more exact way of defining complexity in science. We will do that on the basis of a formal model of legal knowledge (the Logic of Reasonable Inferences and its extensions) that we introduced before, that was implemented as the algorithm of the computer program Argumentator, and that was empirically validated against a multitude of real-life legal cases. The ‘complexities’ of these legal cases proved to be adequately represented in the formal model. In earlier research we tested the formal model against 430 cases, of which 45 were deemed more complex and 385 less complex by lawyers. A first result was that the algorithm (Argumentator), when provided with case facts and legal knowledge, was able to solve 42 of the more complex cases and 383 of the 385 less complex cases in exactly the same way as the legal experts did (including the systematic mistakes made by these experts). A second result was that the algorithm, when instructed to do so, improved the decisions in 30 (66%) of the 45 more complex cases and in 104 (27%) of the 385 less complex cases. This result confirms the relative complexity of the first 45 cases. The selection of these 45 cases thus provides us with the material from which criteria for the definition of complexity in this paper could be derived. These criteria are translated into quantitative statements about the formal representation of the cases. Further research will focus on the fine-tuning of this quantitative model by comparing its results with new empirical data (new cases and opinions of lawyers about the (subjective) complexity of cases). Finally, the ability of the fine-tuned model to predict complexity in new cases will be tested. A positive result can be applied to reduce the aforementioned costs of processing complex legal knowledge.
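As a quick check of the arithmetic, the two improvement rates can be recomputed from the reported counts (a sketch; the counts are those given in the text):

```python
# Improvement counts reported in the earlier research:
# 30 of 45 complex cases, 104 of 385 less complex cases.
complex_improved, complex_total = 30, 45
general_improved, general_total = 104, 385

complex_rate = complex_improved / complex_total   # about 66-67%
general_rate = general_improved / general_total   # about 27%

print(f"complex: {complex_rate:.1%}, general: {general_rate:.1%}")
```

The wide margin between the two rates is what supports the claim that the 45 selected cases are relatively more complex.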

2. Models of complexity in science

There are many different definitions of complexity in science. The aim of this research is to develop a measure of complexity for formal representations of legal knowledge and their algorithmic implementations. In this abstract we will therefore refer to definitions of complexity from Algorithmic Information Theory (AIT), which studies the complexity of data structures (representations of knowledge in a computer). In AIT the complexity of a data structure is equated with its information content. Complexity is postulated to decrease proportionate to the degree of (algorithmic) compressibility of the data structure. To assess the usefulness of AIT for our practical purpose, i.e. the design of a quantitative model of complexity of legal knowledge, we studied some publications from the domain of AIT. We read that complexity is approached as Algorithmic Probability (cf. Solomonoff’s a priori probability), i.e. the higher the probability that a random computer program outputs an object, the less complex this object is considered to be. We read that complexity is approached as Algorithmic Complexity (cf. Kolmogorov’s descriptive complexity), i.e. the shorter the code needed to describe an object (string), the less complex this object is considered to be. This is an interesting approach, since it seems to offer a concrete measure for the complexity of certain objects (e.g. of legal problems), and it associates with the concept of compressibility, which we are able to transpose as simplification (as opposed to sophistication) to the legal domain. Finally we read about Dual Complexity Measures (cf. Burgin, 2006), which relate AIT to more complex problem structures and distinguish the complexity of the system described (the problem and its solution) from the complexity of the description (the algorithm used to describe the problem and its solution). A common and essential aspect of these approaches is the compressibility of the object as a measure of its complexity.
In all these cases the computer program is considered to be an explanation of a (more or less complex) object (or data structure). Our conclusion is that these approaches will be useful when trying to prove certain characteristics of the model of complexity in the legal domain, once it has been developed, but not primarily for the design of the model. We will have to describe the formal model and the algorithm (explanation) first. Just to gain practical insight into the concept of compressibility, we did apply the idea of compressibility to some legal cases (see the example below). However, many of the characteristics of legal cases that make them ‘complex’ according to lawyers are not directly related to compressibility. Moreover, the simplest ‘palaver’ in the legal domain is often meant to be incomprehensible and therefore lacks the (semantic and relational) patterns that are needed to be compressible. We conclude that this concept only partially covers the problem in the legal domain. We are eager to discuss this with our colleagues in the mathematical domain.
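To make the compressibility intuition concrete: Kolmogorov complexity is uncomputable, but the output size of a general-purpose compressor gives a crude computable upper bound. A minimal sketch (the sample strings are invented for illustration, not taken from real cases):

```python
import random
import zlib

# A highly patterned "provision-like" text compresses well...
structured = b"not cw if o and wp and ru and not listed; " * 50
# ...while patternless data of the same length barely compresses.
rng = random.Random(42)
noise = bytes(rng.getrandbits(8) for _ in range(len(structured)))

c_structured = zlib.compress(structured, 9)
c_noise = zlib.compress(noise, 9)

# Compressed size acts as a rough upper bound on descriptive complexity.
print(len(structured), len(c_structured))  # patterned: large reduction
print(len(noise), len(c_noise))            # random: almost none
```

This also illustrates the caveat made above: text that deliberately avoids semantic and relational patterns ("palaver") behaves like the noise case and resists compression even when it is legally trivial.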

**An example of operand compression using logical equivalence in the legal domain**

Objects regulation U.1. appendix III Decree Indication Chemical Waste reads:

‘Waste products are **not** considered as chemical waste [cw] if they are objects [o] that have attained the waste phase of their lifecycle [wp], **unless**:

- This has happened **before** they have reached the user [ru];
- This has happened **after** they have reached the user [ru] and they are 1. transformers [1] .. 10. mercury thermometers [10].’

The logical structure of this legal provision is:

not cw is implied by o and wp and not ((not ru) or (ru and (1 or .. or 10)))

Logically equivalent with this formalisation of the provision is the formula:

not cw is implied by o and wp and ru and not (1 or .. or 10)

which is a compression of the original provision.
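This equivalence can be checked mechanically by enumerating truth assignments. A minimal sketch (collapsing the items 1..10 into a single flag `listed`, an assumption made here for brevity):

```python
from itertools import product

def original(o, wp, ru, listed):
    # not cw <= o and wp and not ((not ru) or (ru and (1 or .. or 10)))
    return o and wp and not ((not ru) or (ru and listed))

def compressed(o, wp, ru, listed):
    # not cw <= o and wp and ru and not (1 or .. or 10)
    return o and wp and ru and not listed

# The two antecedents agree on every truth assignment,
# so the compression preserves the provision's meaning.
assert all(
    original(*v) == compressed(*v)
    for v in product([True, False], repeat=4)
)
print("logically equivalent")
```

Since the two antecedents agree on all sixteen assignments, the compressed provision licenses exactly the same inferences about chemical-waste status as the original.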

Interestingly enough, the retranslation of this equivalent formula into natural language is:

‘Waste products are **not** considered as chemical waste if they are objects that have attained the waste phase of their lifecycle and they have reached the user and they are not 1. transformers .. 10. mercury thermometers’.

Although this example illustrates that compression can be beneficial because it improves the readability of the regulation, it does not reduce its actual complexity which - in practice - is related to different opinions about the meaning of concepts like ‘Waste products’.

3. A formal model of legal knowledge (reasonable inferences)

The first step in developing a model of complexity in the legal domain is to describe the formal characteristics of legal knowledge that are related to the essence of complexity in this domain, i.e. the competition of opinions. In a previous publication (de Vey Mestdagh and Burgin, 2015) we introduced the following model that allows for reasoning about (mutually exclusive) alternative opinions and that allows for tagging the alternatives, e.g., describing their identity and context:

Our knowledge of the world is always perspective bound and therefore fundamentally inconsistent, even if we agree to a common perspective, because this agreement is necessarily local and temporal due to the human epistemic condition. The natural inconsistency of our knowledge of the world is particularly manifest in the legal domain (de Vey Mestdagh et al., 2011).

In the legal domain, on the object level (that of case facts and opinions about legal subject behavior), alternative (often contradicting) legal positions compete. All of these positions are a result of reasoning about the facts of the case at hand and a selection of preferred behavioral norms presented as legal rules. At the meta-level meta-positions are used to make a choice for one of the competing positions (the solution of an internal conflict of norms, a successful subject negotiation or mediation, a legal judgement). Such a decision based on positions that are inherently local and temporal is by definition also local and temporal itself. The criteria for this choice are in most cases based on legal principles. We call these legal principles metaprinciples because they are used to evaluate the relations between different positions at the object level.

To formalize this natural characteristic of (legal) knowledge we developed the Logic of Reasonable Inferences (LRI, de Vey Mestdagh et al., 1991). The LRI is a logical variety that handles inconsistency by preserving inconsistent positions and their antecedents, using as many independent predicate calculi as there are inconsistent positions (Burgin and de Vey Mestdagh, 2011, 2013). The original LRI was implemented and proved to be effective as a model of and a tool for knowledge processing in the legal domain (de Vey Mestdagh, 1998). In order to be able to make inferences about the relations between different positions (e.g. make local and temporal decisions), labels were added to the LRI. In de Vey Mestdagh et al. 2011, formulas and sets of formulas are named and characterized by labelling them in the form (A_i, H_i, P_i, C_i). These labels are used to define and restrict different possible inference relations (Axioms A_i and Hypotheses H_i, i.e. labeled signed formulas and control labels) and to define and restrict the composition of consistent sets of formulas (Positions P_i and Contexts C_i). Formulas labeled A_i must be part of any position and context and therefore are not (allowed to be) inconsistent. Formulas labeled H_i can only be part of the same position or context if they are mutually consistent. A set of formulas labeled P_i represents a position, i.e. a consistent set of formulas including all Axioms (e.g., a perspective on a world, without inferences about that world). A set of formulas labeled C_i represents a context (a maximal set of consistent formulas within the (sub)domain and their justifications, cf. the world under consideration). All these labels can be used as predicate variables and, if individualized, to instantiate predicate variables and consequently as constants (variables as named sets). Certain metacharacteristics of formulas and pairs of formulas were finally described by labels (e.g., metapredicates like Valid, Excludes, Prefer) describing some of their legal source characteristics and their legal relations, which could be used to rank the different positions externally. The semantics of these three predicates (Valid, Exclude and Prefer) are described in de Vey Mestdagh et al. 2011. These three predicates describe the elementary relations between legal positions that are prescribed by the most fundamental sets of legal principles (i.e. principles regarding the legal validity of positions, principles regarding the relative exclusivity of legal positions even if they do not contradict each other, and principles regarding the preference of one legal position over another). It was also demonstrated that the LRI allows for reasoning about (mutually exclusive) alternatives.

In (de Vey Mestdagh and Burgin, 2015) we showed that labels can be used formally to describe the ranking process of positions and contexts. The thus extended LRI allows for local and temporal decisions for a certain alternative, which means without discarding the non-preferred alternatives as belief revision does, and without using the mean of all alternatives as probabilistic logics do. This extended the LRI from a logical variety that could be used to formalize the non-explosive inference of inconsistent contexts (opinions) and the naming of (the elements of) these contexts, to a labeled logical variety in which tentative decisions can be formally represented by using a labelling that allows for expressing the semantics of the aforementioned metapredicates and prioritizing (priority labelling). In (de Vey Mestdagh and Burgin, 2015) we illustrated the use of these labels by examples.

In the next paragraph we will use the extended LRI to identify the quantitative parameters of complexity in the legal domain.

4. A formal model of the complexity of legal knowledge (parameters for a reasonable calculation of complexity)

The processing of legal knowledge takes place in successive phases. Each phase is characterized by its own perspectives and associated parameters of complexity. Roughly, first the different parties in a legal dispute take their positions, then the positions are confronted and a decision is made and finally the decision is presented. The complexity of the dispute differs from phase to phase. Again roughly, from intermediate (the separate positions), to high (the confrontation and decision making), to low (the decision itself). The separate positions are basically consistent and their contents can each be processed within a separate single logical variety. When the dispute starts complexity increases, because the shared axioms of the dispute have to be calculated and the positions are by definition mutually inconsistent and several calculi within the logical variety have to be used to calculate the joint process of the dispute and to decide between different hypotheses within the dispute. Ultimately the decision principles included in the different positions have to be used to rank the different consistent solutions. The dispute ends by presenting the highest ranking consistent (local and temporal) decision, representing a concurring opinion or a compromise. The complexity of this result is reduced again, because it can be (re)calculated within a single consistent variety. Below we will describe these phases in more detail and the related parameters of complexity in terms of the formal model introduced above.

In a given case, complexity can be quantified on the basis of the case elements and relations presented by all parties. The processing takes place in five phases:

At the start of legal knowledge processing the case can be described as:

- A number of sets n (the number of parties involved) of labelled formulas H_{i,l} representing the initial positions of each of the parties l in a legal discourse, i.e. hypotheses H_i of party l about the (alleged) facts and applicable norms in a legal case;

The next step is:

- Determining the intersection between these sets H_{i,l}, which defines A_i representing the agreed case facts and norms, and determining the union of all complements, which defines H_i; (A_i, H_i) represents the initial case description.
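In terms of plain set operations, this step is an intersection followed by a union of complements. A minimal sketch with invented party names and formula strings:

```python
# Hypothetical initial positions H_{i,l} of three parties.
parties = {
    "claimant":  {"o", "wp", "ru", "norm1"},
    "defendant": {"o", "wp", "not ru", "norm2"},
    "regulator": {"o", "wp", "norm1", "norm2"},
}

# A_i: the formulas every party agrees on (intersection).
A = set.intersection(*parties.values())
# H_i: everything that remains disputed (union of all complements).
H = set.union(*parties.values()) - A

print("agreed A_i:", sorted(A))    # ['o', 'wp']
print("disputed H_i:", sorted(H))
```

The pair (A, H) is then the initial case description in the sense used above.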

The third step is:

- Calculating all possible minimal consistent positions P_i that can be inferred from (A_i, H_i) by applying a logic, e.g. the LRI, a logical variety that allows each position to be established by its own calculus. If these calculi differ, this adds to the complexity of the problem. In earlier publications we assumed all the calculi to be the same (predicate calculus).

The fourth step is:

- Calculate all maximal consistent contexts (cf. possible consistent worlds) C_i on the basis of (A_i, H_i, P_i).

The last step is:

- Make a ranking of these contexts on the basis of the application of the metanorms (decision criteria) included in them. A formal description and an example of this process are comprised in (de Vey Mestdagh and Burgin, 2015).
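The third and fourth steps can be sketched as a search for maximal consistent extensions of the axioms. The consistency test below is a toy assumption (a set counts as inconsistent only if it contains a formula together with its explicit negation); in the LRI proper, each position is established by its own calculus:

```python
from itertools import combinations

def consistent(formulas):
    # Toy consistency test: a set is inconsistent as soon as it
    # contains both a formula f and its negation "not f".
    return not any("not " + f in formulas for f in formulas)

def contexts(A, H):
    """Maximal consistent supersets of the axioms A within A | H."""
    hyps = sorted(H)
    found = []
    # Enumerate hypothesis subsets from largest to smallest, keeping
    # only candidates not contained in an already-found context.
    for k in range(len(hyps), -1, -1):
        for sub in combinations(hyps, k):
            cand = set(A) | set(sub)
            if consistent(cand) and not any(cand < c for c in found):
                found.append(cand)
    return found

A = {"o", "wp"}                 # agreed facts and norms
H = {"ru", "not ru", "norm1"}   # mutually disputed hypotheses
for c in contexts(A, H):
    print(sorted(c))
```

The brute-force enumeration is exponential in the number of hypotheses, which already hints at why the number of positions and contexts is itself a useful complexity parameter.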

Each step in this process is characterized by its own parameters of complexity. In legal practice different procedures are used to determine and handle (reduce) complexity in these different phases.

In the first phase a direct, static measure of complexity is commonly applied: the number of parties and the number of hypotheses. This is a rough estimate of the number of different positions (interpretations, perspectives, interests).

In the second phase a direct, relative measure of complexity is commonly applied: the number of A_i and its size relative to H_i. The larger the relative size of A_i, the less complex a case is considered to be, because there is supposed to be more consensus.

In the third and fourth phases all positions P_i and contexts C_i are derived. Given the resulting set of labelled formulas (A_i, H_i, P_i, C_i) representing the legal knowledge presented in a certain case, the problem complexity of this set can be defined as follows:

- The subset A_i (agreed case facts and norms) is by definition included in each P_i and C_i, so its inclusion as such is not a measure of complexity, as it reflects absolute consent;
- The elements of the subset H_i are by definition not included in each P_i and C_i, so the relative size of the inclusion of its elements is a measure of complexity, as it reflects relative consent. If there is more conformity, there is less complexity. It is even possible that certain elements of the subset H_i are not included in any P_i or C_i. The number of these ‘orphaned’ elements can also contribute to the complexity of a case, because they represent antecedents without consequent or consequents without antecedents (a decision is proposed without justification). Orphaned elements can be the result of incompletely presented positions or - worse - be smoke screens;
- The relative size of the fraction of subset A_i in (A_i, H_i) - relative to the fraction of A_i in other cases - is a measure of complexity, as it reflects the size of shared (consented) knowledge in a legal dispute. This holds even if the size of A_i is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the A_i into consideration;
- The relative size of the fraction of subset H_i in (A_i, H_i) - relative to the H_i in other cases - is a measure of complexity, as it reflects the size of disputed knowledge in a legal dispute. This holds even if the size of H_i is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the H_i into consideration;
- The relative size of the subset P_i (relative to the P_i in other cases) is a measure of complexity, as it reflects the number of different minimal positions that can be taken logically in this specific case. The size of P_i can only be manipulated indirectly (through the respective sizes of A_i and H_i);
- The relative size of the subset C_i (relative to the C_i in other cases) is a measure of complexity, as it reflects the number of different consistent contexts (possible decisions) that can be distinguished in this specific case.

In the fifth phase ranking of the contexts takes place.

The number of rankings depends on the inclusion of metanorms in the respective contexts. Metanorms that are agreed upon are part of A_i; metanorms that are not agreed upon are part of H_i. The process of applying the metanorms is fully recursive, since the objects of the metanorms are other (meta)norms, which are themselves also part of (A_i, H_i). This means that the determination of the complexity of the application of the metanorms is included in the previous phases. In this phase only the resulting number of rankings is established, and it can be considered to be an independent measure of complexity.

5. Validation of the model of complexity

The model of parameters for a reasonable calculation of complexity of legal knowledge as described in the previous paragraph is based on prior theoretical and empirical research into the complexity of legal knowledge (de Vey Mestdagh, 1997, 1998). A total of 430 environmental law cases have been formally represented in the formal model of legal knowledge introduced in paragraph 3 (the extended LRI) and their relative complexity has been established on the basis of legal expert judgements. The opinion of the experts was that 45 cases were of a complex nature and 385 of a less complex (more general) nature. This has been verified by applying an expert system to these cases that was enabled (provided with more complete data and knowledge) to improve on the human judgements in the 430 cases. The test results have shown that in the complex cases 66% of the human judgements were improved by the expert system (of which 20% full revisions), while in the general cases only 27% of the human judgements were improved by the expert system (of which only 2% full revisions). The complex cases are characterized by higher counts of the parameters distinguished in the previous paragraph.

Further validation research is needed to refine the model of parameters for a reasonable calculation of complexity of legal knowledge as described in the previous paragraph. The relative weight of the counts of the parameters described will be varied against the available dataset of legal cases. The results will also be correlated with other variables that are available, to gain further insight into possible parameters of complexity. Examples of these variables are: number of submitted documents, length of procedure, number of appeals, spending power of the parties involved, level of expertise of the lawyers involved, etc.

6. Conclusion and further research

In this paper we have explored the concept of complexity in the legal domain. A first conclusion is that the concept has not been studied explicitly in the legal domain: only indirectly, as a qualitative concept (vagueness, open texture, etc.), and hardly ever as a quantitative concept. However, a quantitative model of complexity in the legal domain has - apart from its scientific meaning per se - legal, economic and political implications. It will allow us to improve the quality and efficiency of legal proceedings. Algorithmic Information Theory offers several approaches to the quantification of complexity that inspired the approach chosen in this paper. It induced the thought that a distinction between problem complexity and solution complexity is necessary, and that a model of complexity based on the formal representation of legal knowledge should be the first step in developing a model of complexity in the legal domain. In this paper we give a description of a formal representation of legal knowledge (the extended Logic of Reasonable Inferences) and we describe the quantitative parameters of complexity for this model. We would like to call the result Reasonable Complexity, because it is based on the LRI and because it inherits its relative, perspective-bound character. Complexity is specifically relative to the number of perspectives combined in the knowledge under consideration. Further research will focus on extending the model of complexity to solution complexity, using - amongst others - available algorithms (i.a. Argumentator, a computer program we developed to implement the LRI). It will also use an available dataset of 430 environmental law cases that have been described and analysed before and that have already been represented in Argumentator.

References

Burgin, M.: Super-Recursive Algorithms, Springer Science & Business Media, 2006

Burgin, M., de Vey Mestdagh, C.N.J.: The Representation of Inconsistent Knowledge in Advanced Knowledge Based Systems. In: Andreas Koenig, Andreas Dengel, Knut Hinkelmann, Koichi Kise, Robert J. Howlett, Lakhmi C. Jain (eds.). Knowledge-Based and Intelligent Information and Engineering Systems, vol. 2, pp. 524-537. Springer Verlag, ISBN 978-3-642-23862-8, 2011

Burgin, M., de Vey Mestdagh, C.N.J.: Consistent structuring of inconsistent knowledge. In: J. of Intelligent Information Systems, pp. 1-24, Springer US, September 2013

Dworkin, R.: Law's Empire, Cambridge, Mass., Belknap Press, 1986

Hart, H.L.A.: The Concept of Law, New York, Oxford University Press, 1994

Kades, E.: The Laws of Complexity & the Complexity of Laws: The Implications of Computational Complexity Theory for the Law (1997). Faculty Publications. Paper 646. http://scholarship.law.wm.edu/facpubs/646

Ruhl, J. B.: Complexity Theory as a Paradigm for the Dynamical Law-and-Society System: A Wake-Up Call for Legal Reductionism and the Modern Administrative State. Duke Law Journal, Vol. 45, No. 5 (Mar., 1996), pp. 849-928

Schuck, Peter H.: Legal Complexity: Some Causes, Consequences, and Cures. Duke Law Journal, Vol. 42, No. 1 (Oct., 1992), pp. 1-52

Vey Mestdagh, C.N.J. de, Verwaard, W., Hoepman, J.H.: The Logic of Reasonable Inferences. In: Breuker, J.A., Mulder, R.V. de, Hage, J.C. (eds) Legal Knowledge Based Systems, Model-based legal reasoning, Proc. of the 4th annual JURIX Conf. on Legal Knowledge Based Systems, pp. 60-76. Vermande, Lelystad, 1991

Vey Mestdagh, C.N.J. de.: Juridische Kennissystemen, Rekentuig of Rekenmeester?, Het onderbrengen van juridische kennis in een expertsysteem voor het milieuvergunningenrecht (proefschrift), 400 pp., serie Informatica en Recht, nr. 18, Kluwer, Deventer, 1997, ISBN 90 268 3146 3;

Vey Mestdagh, C.N.J. de: Legal Expert Systems. Experts or Expedients? In: Ciampi, C., E. Marinai (eds.), The Law in the Information Society, Conference Proceedings on CD-Rom, Istituto per la documentazione giuridica del Centro Nazionale delle Ricerche, Firenze, 2-5 December 1998, 8 pp.

Vey Mestdagh, C.N.J. de, Hoepman, J.H.: Inconsistent Knowledge as a Natural Phenomenon: The Ranking of Reasonable Inferences as a Computational Approach to Naturally Inconsistent (Legal) Theories. In: Dodig-Crnkovic, G. & Burgin, M. (Eds.), Information and Computation (pp. 439-476). New Jersey: World Scientific, 2011

Vey Mestdagh, C.N.J. de, Burgin, M.: Reasoning and Decision Making in an Inconsistent World: Labeled Logical Varieties as a Tool for Inconsistency Robustness. In: R. Neves-Silva, L. C. Jain, & R. J. Howlett (Eds.), Intelligent Decision Technologies. (pp. 411-438). Smart Innovation, Systems and Technologies; Vol. 39. Springer, 2015

White, M.J.: Legal Complexity and Lawyers’ Benefit from Litigation. International Review of Law and Economics (1992) 12, 381-395.

[1] Cf. H.L.A. Hart, who uses the concept of discretion to characterize hard (complex) cases, in The Concept of Law, New York, Oxford University Press, 1994; and R. Dworkin, who distinguishes easy from hard cases using the concept of principled interpretation, in Law's Empire, Cambridge, Mass., Belknap Press, 1986. Although fundamentally differing in their opinion about the sources of the decision criteria, they both acknowledge the alternative perspectives that play a role in deciding complex cases (the judge’s discretion in the light of the parties’ alternative perspectives vs. the judge’s principled interpretation in the context of the parties’ alternative perspectives).

*Published: 9 June 2017*

**Abstract:**

**Abstract**—Biomimetics is the examination of nature, its models, systems, processes, and elements, to emulate or take inspiration from in order to solve human problems. The term Biomimetics comes from the Greek words bios, meaning life, and mimesis, meaning to imitate. Applications of Biomimetics have led to innumerable advances in science and engineering. The Computer Science field is no exception. The von Neumann Architecture, on which modern computers are based, took significant inspiration from the brain. The human mind represents the pinnacle of natural creation when it comes to logical processing of information. Conceptually, computer systems are information processors, like our minds. As a consequence of Biomimetics principles, computer systems can be significantly improved by mimicking the conceptual model used by the mind, with respect to overall complexity, mathematical foundation, encapsulation, decoupling, scalability, interoperability, and realistic correspondence.

**Galvis, E. A. & Galvis, D. E.**, Freedom Software

1. INTRODUCTION & MOTIVATION

The main contribution of this abstract is the specification of the *Turing-complete* Conceptual computing model, mimicked from the mind. A mathematical formulation for the model is described. To fully describe the proposed model, we rely on conceptual models studied by several disciplines, including psychology, philosophy, and cognitive science (see Related Work); they are mentioned where appropriate. Relevant ideas such as realism (realistic correspondence), reductionism, and Occam's razor have had a prominent impact on science and scientific models [30, 29, 35], specifically in the field of computer science, where researchers have leveraged them before. Philosophical realism states that reality exists independent of the observer and that the truth of a representation or model is determined by how it corresponds to reality (the Correspondence Theory of Truth [30]). Several key aspects serve as motivation for *reductionism* [52], among them: a) unification of science; b) minimization of the terms and concepts used by theories, in order to encourage theoretical simplicity and eradicate redundancy (Occam's razor), which should make science more accessible and easier to learn; c) filling in of gaps and elimination of contradictions between theories.

The issues associated with information technologies and solutions based on traditional models of computing have been studied by several authors [26,36,14,13,20]. The list of limitations includes the following.

Consider the following quotes: “We can note in passing that one of the **biggest problems** in the development of object-oriented SW architectures, particularly in the last 25 years, has been an enormous over-focus on objects and an under-focus on messaging (most so-called object-oriented languages *don’t really* use the looser coupling of *messaging*, but instead use the much tighter *gear meshing of procedure calls* – this hurts *scalability and interoperability*).” Alan Kay et al.

“the complex machinery of procedure declarations including elaborate naming conventions, which are further complicated by the substitution rules needed in calling procedures. Each of these requires a complex mechanism to be built into the framework so that variables, subscripted variables, pointers, file names, procedure names, call-by-value formal parameters, and so on, can all be properly interpreted.” [26]

The implementation of traditional multithreaded/distributed information technologies and solutions is a complex, costly endeavor, hampered by risks [26,14,13,4,5,15]: a) complexities in dealing with distributed information technologies [13, 14, 15, 20]; b) complexities in dealing with multithreaded information technologies; c) the mathematical foundations associated with traditional software/information technologies are often “complex, bulky, and not useful” [26]. “Another important shortcoming is their lack of useful mathematical properties and the obstacles they present to reasoning about programs” [26].

In his paper titled “Can Programming Be Liberated From the von Neumann Style?”, Backus described several of the multiple issues associated with traditional models of computing [26]. Such issues are still relevant today. Several of them have been mentioned in this section. Several authors have also suggested the need for new models more adaptable, flexible, reliable, interactive, and natural – founded on information processing and natural computation [36, 29].

“This essay presents several ideas that combined result in a new view of natural, interactive computing as *information processing*, with *broad consequences* relevant for not only computer science and closely related fields of physics, mathematics and logic but even for traditionally non-mechanizable fields of biology, cognitive sciences and neuroscience. We have still much to learn from *nature* and especially from naturally intelligent *information processing* systems such as humans and animals which will bring about *new models of computation and intelligence* [*A=(f(m),I)/C*].” Gordana Dodig-Crnkovic [36]

2. Mathematical Model (Information Machines)

“The truth that the ultimate *laws of thought are mathematical in their form*, viewed in connexion with the fact of the possibility of error, establishes a ground for some *remarkable conclusions*. If we directed our attention to the scientific truth alone, we might be led to infer an almost exact parallelism between the intellectual operations and the movements of external nature.” George Boole (!) [1].

“My contention is that machines can be constructed which will simulate the behavior of the human mind very closely.” Alan Turing

There are three main concepts involved as part of the mathematical computing model. The mathematical formulation is very intuitive, based on abstractions that everyone can relate to and readily grasp.

**Information Machine (A):** an automatic machine able to perform computations via the information primitive, which defines the machine’s single function (or purpose). The machine A is defined by a two-tuple A = (*processInformation(message)*, I). A is Turing complete. It can also be expressed as A = (*f(m)*, I), where *f* represents any computable function.

**Message (m):** incoming information is processed in the form of messages (m), also called information packets or chunks (IC). A message (m) is expressed by an n-tuple m = (b1, …, bn) where b1, …, bn are symbols in a finite set of symbols (the alphabet ∑).

**Information (I):** information machines include a memory subcomponent able to store and retrieve information. The information stored (i.e. known) by the machine is represented by I = (IC1, …, ICn), where IC1, …, ICn are information chunks (or packets). ICi = (b_{i1}, …, b_{in}); b_{i1}, …, b_{in} ϵ ∑; i ϵ [1 .. n].

**Information primitive:** *processInformation(message)* represents a function *f*: ∑* → ∑*.

To be rigorous, ∑ needs to be included as part of the machine definition: A = (*f(m)*, I, ∑). For the sake of simplicity, it is usually excluded. Information itself (I) can be classified into two categories: conceptual (C) and non-conceptual. The following definitions apply to conceptual information.

**Concept (C):** conceptual information expressed by a single language construct, C = (a1, a2, …, an) where a1, a2, …, an are information associations.

**Information association (a):** a_{i} is an association of the form (xi, yi), meaning that xi is equal or associated to yi (xi = yi), where xi and yi are defined as follows.

- xi = (b1, …, bn); b_{i} ϵ ∑; i ϵ [1 .. n], or xi represents a concept as defined by C.

- yi = (b1, …, bn); b_{i} ϵ ∑; i ϵ [1 .. n], or yi represents a concept as defined by C.
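As a minimal sketch (all class and method names here are hypothetical, not part of the paper's reference implementation), a concept and its associations could be encoded in Java as a list of (x, y) pairs, where each value may itself be a symbol string or a nested concept:

```java
import java.util.List;
import java.util.Map;

// Hypothetical encoding of a Concept C = ((x1, y1), ..., (xn, yn)).
// Each association pairs a symbol string with a value; a value may be
// another Concept, mirroring the nested definition of xi and yi above.
public class Concept {
    private final List<Map.Entry<String, Object>> associations;

    public Concept(List<Map.Entry<String, Object>> associations) {
        this.associations = associations;
    }

    public List<Map.Entry<String, Object>> associations() {
        return associations;
    }

    public static void main(String[] args) {
        // A nested concept used as the value of an association.
        Concept sound = new Concept(List.of(Map.entry("bark", (Object) "loud")));
        Concept dog = new Concept(List.of(
                Map.entry("name", (Object) "dog"),
                Map.entry("legs", (Object) "4"),
                Map.entry("sound", (Object) sound)));
        System.out.println(dog.associations().size()); // prints 3
    }
}
```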

In summary, the model consists of the following main concepts: information, messaging, and processing of information as defined by a single mathematical function (*f(m)*). It should become obvious why information is the fundamental concept behind the model. A complex problem has been reduced, via conceptualization, to a complete and streamlined set of implementable concepts forming part of a straightforward, Turing-complete mathematical model. Messaging is tightly intertwined with the concept of information: through messaging, information is transferred and the machine is able to interact with its physical environment. To visualize the natural concepts involved, you may want to think about the human mind and its associated entities. Since the model attempts to mimic the mind, there is a realistic *one-to-one* correspondence. As usual, nature is leading the way in terms of a paradigm for computing and information processing: the *conceptual paradigm*. One straightforward implementation of the Information Machine is via an encapsulated and decoupled component or object consisting of a single method with a single parameter (message) [2, 4, 5, 9]. The method may return information in the form of a concept (C). A component/object that implements the Information Machine abstraction is called a Live or Animated component (see Implementation Considerations) [2, 4, 5, 9]. It can be visualized as a (mini-)computer, since both have equivalent processing power.

From a conceptual perspective, it should be clear from observation of reality that the same three concepts leveraged by the Turing-complete information machine apply to the mind, in agreement with realism and the Correspondence Theory of Truth [30]. The information machine represents a *model* of the mind. It should be stated that scientific *models* do not need to be exact in order to be valid, only approximately true (see Models in Science [30]). Multiple valid models of the same natural phenomenon are also feasible; consider weather models, for instance. Regardless of how closely the proposed mathematical *model* mimics the conceptual mind, it is *Turing complete* and presents a wide range of measurable qualities applicable to information technologies (see Model Evaluation and Metrics [9]). On the other hand, it should be emphasized that realism (i.e. realistic correspondence) is a key aspect when evaluating scientific models.

Additional aspects related to realistic correspondence between the proposed mathematical model and the conceptual mind have been studied and documented in more detail (see Physical Foundation [9] and Related Work). Most of such aspects, mainly related to cognitive architectures, are substantial and beyond the scope of this paper, which focuses on the computing model (information machine) and its mathematical specification (*A=(f(m),I)/C)*.

3. Consequences

The consequences and qualities associated with the Conceptual computing model are derived from its mathematical foundation. Applications and components built on the model inherit these qualities [9,4,5,2]. It should be obvious why information is the model’s fundamental concept, which does not come as a surprise since we are dealing with *information* technologies. Every aspect should be viewed from the standpoint of information.

**Simplicity and Occam’s razor**: the conceptual model is straightforward: a Turing-complete information machine (A = (*f(m)*, I)), a single information primitive, and a single Concept construct to represent information (C). The Conceptual model is perhaps the simplest one, yet Turing complete. All redundant abstractions and primitives add complexity and are unnecessary. Consider the simplification in terms of the number of concepts, components, and the single primitive required for implementation: most entities in the world around us need to be realistically represented as concepts (C) because they are unable to process information (passive entities). The approach and associated mathematical formulation reduce (i.e. *conceptualize*) the universe of *information* technologies to a streamlined set of implementable concepts: information, messaging, and the information machine.

“William of Occam opposed the proliferation of entities, but only when carried beyond what is needed – praeter necessitatem! … But computer scientists must also look for something *basic* which underlies the various models; they are interested not only in individual designs and systems, but also in a *unified theory* of their ingredients.” Robin Milner [29]

**Completeness**: the Conceptual approach is based on a Turing-complete information machine (A = (*f(m)*, I)) and a single language construct (Concept) [4,9]. Turing completeness has been demonstrated via formal proof (see Information Machine and Turing Completeness). Therefore, it can be used for the complete conceptualization and implementation of *arbitrary* information technologies.

**Encapsulation.** The Conceptual model and associated abstractions improve encapsulation. Component functionality, information (I), and the processing mechanism are encapsulated into a single entity; they should not be artificially modeled as separate objects or components. It should be fairly obvious that the information machine (*A=(f(m),I)*) is a fully encapsulated and independent entity.

**Coupling.** Decoupling is improved by the Conceptual model. Component functionality, the processing/threading mechanism, and the messaging mechanism are decoupled; each one can be modified independently without impacting the others. Again, it should be fairly obvious that the information machine (*A=(f(m),I)*) is fully decoupled from its environment. The Conceptual model does not present the web of interdependencies required by traditional APIs based on the “gear meshing of procedure calls”.

**Interoperability**: the Conceptual model helps improve interoperability [9,4,5,2]. A concept construct of the form C = {(x1, y1), … , (xn, yn)} can be freely transferred between systems and components regardless of technologies, languages, protocols, and data representations. The same principle applies to any arbitrary concept (C). In a sense, the concept construct (C) is a fluid abstraction that can be interchanged between heterogeneous technologies, systems, components, and applications. A process based on the Turing-complete Conceptual computing model can transparently incorporate components (*A=(f(m),I)*) and applications that use multiple technologies, languages, platforms, protocols, and so forth.
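As an illustrative sketch only (the wire format and names below are assumptions, not the paper's actual mechanism), a concept expressed as plain (x, y) pairs can be flattened to a technology-neutral text form and parsed back on the receiving side, regardless of the platform on either end:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: a concept C = {(x1,y1), ..., (xn,yn)} flattened to a
// neutral "key=value;key=value" text form that any language can parse.
public class ConceptTransfer {
    static String serialize(Map<String, String> concept) {
        return concept.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(";"));
    }

    static Map<String, String> parse(String wire) {
        Map<String, String> concept = new LinkedHashMap<>();
        for (String pair : wire.split(";")) {
            String[] kv = pair.split("=", 2);
            concept.put(kv[0], kv[1]);
        }
        return concept;
    }

    public static void main(String[] args) {
        Map<String, String> order = new LinkedHashMap<>();
        order.put("item", "book");
        order.put("qty", "2");
        String wire = serialize(order);                 // "item=book;qty=2"
        System.out.println(parse(wire).equals(order));  // prints true
    }
}
```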

**Scalability:** as discussed by Alan Kay et al., technologies and models based on the gear meshing of procedure calls present drawbacks in terms of scalability (see Introduction & Motivation). The Conceptual computing model, on the other hand, does not present these scalability limitations: client component, server component, and communication mechanism are decoupled. Servers can be upgraded one by one without impacting the client application and the rest of the infrastructure; once all the servers have been gradually upgraded, clients can be upgraded to take advantage of the new software functionality. As a consequence, an infrastructure based on the Conceptual model can scale to an arbitrary number of servers and clients running 24/7. This application of the proposed approach assumes that the new software version relies on backward-compatible messaging.

**Realistic correspondence:** there is an accurate correspondence between the Conceptual model and the way information is transferred and processed in the real world (the Correspondence Theory of Truth [30]). In particular, the **unified** model attempts to mimic the mind’s conceptual framework. Artificial abstractions/primitives are redundant and may exhibit complexity, limitations, and/or inefficiencies (like the gear meshing of procedure calls). Notice the faithful correspondence between the mathematical model and reality: all the relevant concepts are included (information via the concept construct (C), messaging, and processing of information), and all of them cooperate in harmony and unity. The Conceptual computing model is also in close correspondence with, or supported by, leading psychological, cognitive, and philosophical theories of the mind (see Related Work).

“It is necessary to remark that there is an ongoing synthesis of computation and communication into a **unified** process of *information processing*. Practical and theoretical advances are aimed at this synthesis and also use it as a tool for further development. Thus, we use the word computation in the sense of *information processing* as a whole. Better theoretical understanding of computers, networks, and other information processing systems will allow us to develop such systems to a higher level.” Mark Burgin [36]

4. IMPLEMENTATION CONSIDERATIONS

The separation between the involved concepts (model) and their implementation needs to be emphasized, which is a common characteristic found in related approaches [46], [11]. Thus, multiple valid implementations (i.e. realizations) of the same Turing-complete mathematical model are feasible. As a specific example, a Turing machine represents a mathematical model that can have multiple realizations.

One straightforward software implementation of the Turing-complete information machine (*A=(f(m),I)*) is via an encapsulated object or component consisting of a single method with a single parameter (message). Such a component/object is called a Live or Animated component. As an example, the following software snippet uses Java/Android for implementation. The appendix includes a complete example.

```java
public class AnimatedComponent {

    /* Process component messages */
    public Object processMessage(Object message) {
        Object reply;
        // Add logic to process the message here. Intuitively, any function
        // or procedure can be implemented.
        // Optionally, auxiliary internal (private) methods may be invoked
        // from this single information primitive implementing messaging.
        ...
        // Return a reply (output)
        return reply;
    }
}
```
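To make the skeleton above concrete, a minimal hypothetical Live/Animated component might implement one specific f(m), here summing the integers carried by the message (the class name and message type are illustrative assumptions, not part of the paper's reference implementation):

```java
// Hypothetical Live/Animated component computing a concrete f(m):
// the incoming message is an int[] and the reply is the sum of its elements.
public class AdderComponent {
    public Object processMessage(Object message) {
        int sum = 0;
        for (int v : (int[]) message) {
            sum += v; // process each symbol (value) of the message
        }
        return sum; // the reply (output information)
    }

    public static void main(String[] args) {
        AdderComponent adder = new AdderComponent();
        System.out.println(adder.processMessage(new int[] {1, 2, 3})); // prints 6
    }
}
```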

Due to Turing completeness, the Live/Animated component is able to implement any computable function or algorithm (*f(m)*). Therefore, Live/Animated components can be leveraged to implement arbitrary information technologies, for instance, the group of components required to provide comprehensive distributed capabilities: distributed access, messaging, and security.

5. Information Machine and Turing Completeness

This section discusses the proposed mathematical model and demonstrates its Turing completeness. A Turing machine is specified as a 7-tuple M = (Q, Γ, b, Σ, δ, q_{0}, F). Given an arbitrary Turing machine, let us demonstrate that an equivalent information machine (*A=(f(m),I)*) can be built based on the information primitive *f(m)*. To be rigorous, ∑ needs to be included as part of the machine definition: A = (*f(m)*, I, ∑). For the sake of simplicity, it is usually excluded.

The machine tape can be implemented as an array, vector, or any other comparable data structure. It is part of the information (I) stored in the machine’s memory subcomponent. The machine’s transition table, current state, initial state, and set of final states are also part of (I).

```
// Pseudocode implementation based on the information primitive (f(m)).
// The message (m) consists of a single symbol.
void processInformation(symbol) {
    Transition transition; // Consists of next state, symbol to be written,
                           // and tape movement ('L' or 'R')

    // Transition table being replicated.
    transition = transitionTable[currentState, symbol];

    // The following two operations on the machine tape mimic the ones
    // implemented by a Turing machine.
    updateTape(transition.symbol);  // Update the machine tape
    moveHead(transition.movement);  // Move the head

    currentState = transition.nextState; // Part of the information stored
                                         // in the machine's memory
}
```
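The pseudocode above can be turned into a small runnable sketch. The machine below is an illustrative assumption (a one-state machine that inverts a binary tape, with '_' as the blank symbol), not taken from the paper; it stores the tape, head position, and current state as its information (I) and performs one Turing-machine step per call to the information primitive:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the pseudocode as runnable Java: a one-state Turing
// machine that inverts a binary tape ('_' is the blank symbol).
public class InformationMachineTM {
    record Transition(String nextState, char write, char move) {}

    private final Map<String, Transition> transitionTable = new HashMap<>();
    private final StringBuilder tape;   // part of the information (I)
    private int head = 0;               // part of (I)
    private String currentState = "q0"; // part of (I)

    InformationMachineTM(String input) {
        tape = new StringBuilder(input);
        transitionTable.put("q0:0", new Transition("q0", '1', 'R'));
        transitionTable.put("q0:1", new Transition("q0", '0', 'R'));
        transitionTable.put("q0:_", new Transition("halt", '_', 'R'));
    }

    // The information primitive f(m): one Turing-machine step per symbol.
    void processInformation(char symbol) {
        Transition t = transitionTable.get(currentState + ":" + symbol);
        tape.setCharAt(head, t.write());     // update the machine tape
        head += (t.move() == 'R') ? 1 : -1;  // move the head
        currentState = t.nextState();        // stored back into (I)
    }

    String run() {
        while (!currentState.equals("halt")) {
            if (head == tape.length()) tape.append('_'); // extend the tape
            processInformation(tape.charAt(head));
        }
        return tape.toString();
    }

    public static void main(String[] args) {
        System.out.println(new InformationMachineTM("1010").run()); // prints 0101_
    }
}
```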

For any arbitrary Turing machine (M), an equivalent information machine (A) can be built, which demonstrates that *A=(f(m),I)* is Turing complete. As a consequence, and based on the Church-Turing thesis, any computable function or algorithm can be computed using the information machine (A).

*f*: ∑* → ∑* is a generalization of *processInformation(symbol)*, applicable to messages (information chunks) of finite length (∑*) as opposed to a single symbol. Animated/Live components represent a software implementation of the Turing-complete information machine. In other words, Animated/Live components based on the information primitive (*f(m)*) can be used to implement any arbitrary computer technology, protocol, language, and/or framework, including secure, distributed, and fault-tolerant technologies. There is an alternative approach to demonstrating Turing completeness (see Information Machine and Turing Completeness [9]). Intuitively, for any computable function (*f'(m)*) of your choosing, an information machine or corresponding Animated/Live component can be built (A = (*f'(m)*, I)) to compute it.
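As one simple special case of lifting a per-symbol primitive f: ∑ → ∑ to messages in ∑* (a general f over ∑* need not be symbol-wise; the names below are illustrative assumptions), the primitive can be applied to each symbol of the message in turn:

```java
// Illustrative lifting of a per-symbol primitive to whole messages.
public class MessagePrimitive {
    // Per-symbol primitive f: one symbol in, one symbol out
    // (here: shift a character forward by one code point).
    static char processInformation(char symbol) {
        return (char) (symbol + 1);
    }

    // Generalization f over a whole message (information chunk),
    // applying the per-symbol primitive to each symbol in order.
    static String processInformation(String message) {
        StringBuilder out = new StringBuilder();
        for (char c : message.toCharArray()) {
            out.append(processInformation(c));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(processInformation("abc")); // prints bcd
    }
}
```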

6. Evidence, Evaluation, and Metrics

A reference implementation of the Conceptual computing model has been produced, which demonstrates its applicability and qualities in a tangible fashion. An evaluation of the model and its reference implementation has been performed in qualitative and quantitative terms (see Model Evaluation and Metrics [9]). Several production-quality applications have been built based on the reference implementation of the Conceptual model. Tangible research results based on the Conceptual approach have been published earlier [4, 5, 2]. Turing completeness has been demonstrated via formal proof (see Information Machine and Turing Completeness).

7. Related Work

The study of the conceptual mind is a multidisciplinary endeavor. Multiple related disciplines have made significant contributions: psychology, neuroscience, computer science, mathematics (logic), and philosophy. *Relevant ideas such as realism (realistic correspondence), reductionism, and Occam's razor have had a prominent impact on science and scientific models [30, 29]*: specifically, in the field of computer science where researchers have leveraged them before. Clearly, these ideas have significant relevance to the realistic computing model being presented.

The Conceptual approach represents a *mathematical computing model (Turing complete)*. Multiple models of computing have been proposed [26]: operational, applicative, and von Neumann models (see Model Evaluation and Metrics [9]). Through the years, additional models of parallel computation have been proposed [45]: PRAM (parallel random-access machine), BSP (bulk synchronous parallelism), and LogP. Related mathematical models of concurrency have also been proposed: the Actor model, process algebras, and Petri nets [7, 29]. All of these models of computing have a distinctly different mathematical foundation. There are conceptual differences as well, in terms of degree of realistic correspondence, abstractions, simplicity (Occam’s razor), a single information primitive, applicability (focus/problem area), natural inspiration (Biomimetics), and the overall goal of mimicking the mind’s framework/model for information processing (see the appendix on Related Models and Approaches [9]).

The Turing-complete Conceptual model can be applied to the implementation of arbitrary computing/information technologies. Due to its versatility and wide applicability, it can be compared to a large variety of related models, which is challenging because of their diversity and number. Due to size constraints, it is not feasible to cover all of them within this section; for a complete discussion, see reference [9]. It should be stated that all related models, technologies, reuse approaches, and architectural styles are distinctly different because of their mathematical foundations (i.e. models). As a general rule of thumb, if the technology, approach, or architectural style is not based on the proposed Turing-complete mathematical formulation (*A=(f(m),I)*/C), then it is clearly different. Furthermore, if the underlying model consists of more abstractions than the three proposed, there is redundancy that should be ‘shaved away’ according to reductionism/Occam’s razor and the concepts exhibited by the natural mind. Redundant abstractions bring forth unnecessary complexity. In agreement with Occam’s razor, reductionism, Biomimetics, and the Turing-complete Conceptual computing model, all *information* models, technologies, approaches, and architectural styles can be reduced (unified/simplified) to the concepts that form the model: *information*, messaging, and the information processor/machine (*A=(f(m),I)*).

As mentioned before, the Turing-complete Conceptual model is in correspondence with and/or supported by well-known theories and related disciplines [9, 30, 25, 31, 32, 21, 53]: the computational theory of mind (CTM), the Language of Thought Hypothesis (LOTH), cognitive psychology, psychological associationism, Unified Theories of Cognition (UTC), the Physical Symbol System Hypothesis (PSSH), and philosophical conceptualism/realism. Reference 9 covers the realistic correspondence between the mathematical model and the conceptual mind in more detail as part of the cognitive area, which is substantial and beyond the scope of this paper (see Cognitive/AI Architecture and the appendix on Related Theories, Studies, and Research [9]).

The Conceptual computing model is a broad subject applicable to a wide variety of information technologies and problem areas (due to Turing completeness). It is not feasible to cover all the relevant information and supporting evidence within a single document, so this extended abstract includes *multiple cross references*. If a specific area of interest seems to be missing information, please review the references. Reference 9 provides a more detailed picture of the overall effort and covers information that had to be excluded or condensed due to size constraints. In particular, comparisons with related models and approaches are discussed (see the appendix on Related Models and Approaches [9]), along with the reference implementation of the model, formal demonstrations (Turing completeness), detailed evaluation/metrics, code examples, and additional implementation/technical considerations.

REFERENCES (for actual/complete paper)

[1] Boole, G. *An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities.* Macmillan. Reprinted with corrections, Dover Publications, New York, NY, 1958. Originally published in 1854.

[2] Galvis, A. *Messaging Design Pattern and Pattern Implementation*. 17^{th} conference on Pattern Languages of Programs - PLoP 2010.

http://sites.google.com/site/conceptualparadigm/documents/Messaging.docx?attredirects=0&d=1

[3] Turing, A. *Computing Machinery and Intelligence.* Mind 1950.

[4] Galvis, A. *Process Design Pattern and a Realistic Information Model*. 18^{th} conference on Pattern Languages of Programs (writers’ workshop) - PLoP 2011.

http://sites.google.com/site/conceptualparadigm/documents/Realistic.docx?attredirects=0&d=1

[5] Galvis, A. *Messaging Design Pattern and Live or Animated Objects*. 18^{th} conference on Pattern Languages of Programs (writers’ workshop) - PLoP 2011.

http://sites.google.com/site/conceptualparadigm/documents/AnimatedComponent.docx?attredirects=0&d=1

[6] Lamport, L. *The implementation of Reliable Distributed Multiprocess Systems*. Computer Networks. 1978.

[7] Hewitt, C. E. et al*. Actors and Continuous Functionals* *.* MIT. Laboratory for Computer Science. MIT/LCS/TR - 194, December 1977.

[8] Gregor Hohpe and Bobby Woolf. *Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solution.* Addison-Wesley, 2004.

[9] Galvis, E. A. *Conceptual Model (Compilation of papers and reports)*

http://sites.google.com/site/conceptualparadigm/documents/ConceptualE.docx?attredirects=0&d=1

[10] Galvis, E. A. *Jt - Java Pattern Oriented Framework, An application of the Messaging Design Pattern*. IBM Technical Library, 2010.

[11] Gamma, E. et al. *Design Patterns: Abstraction and Reuse of Object-Oriented Design. *ECOOP '93 Proceedings of the 7th European Conference on Object-Oriented Programming.

[12] Fielding, R. T. *Architectural Styles and the Design of Network-based Software Architectures. Ph.D. Dissertation.* University of California, 2000.

[13] Bih, J. *Service Oriented Architecture (SOA) a new paradigm to implement dynamic e-business solutions*. ACM ubiquity, August, 2006.

[14] Henning, M. *Rise and Fall of CORBA*. ACM queue, June, 2006.

[15] Loughran S. et al. *Rethinking the Java SOAP Stack.* IEEE International Conference of Web Services (ICWS) 2005. Orlando, USA, 2005.

[16] Schneider, F. B. *Implementing fault-tolerant services using the state machine approach: A tutorial.* ACM Computing Surveys. 1990.

[17] Michael, B. et al. *BPELJ: BPEL for Java*. BEA Systems Inc. and IBM Corp., USA, 2004.

[18] Wollrath, A. et al. *A distributed Object Model for the Java System*. Proceeding of the USENIX 1996. Toronto, Canada, 1996.

[19] *BRAIN 2025 Report, A Scientific Vision.* U. S. National Institute of Health (NIH). June 2014.

[20] Goth, G. *Critics say Web Services need a REST*. IEEE distributed systems online. Vol. 5. No. 12, 2004.

[21] Sowa, J. *Cognitive Architectures for Conceptual Structures*, Proceedings of ICCS 2011, Heidelberg: Springer, 2011, pp. 35-49.

[22] Roberts, S. on George Boole. *The Isaac Newton of Logic*. The Globe and Mail. March 27, 2004*.* http://www.theglobeandmail.com/life/the-isaac-newton-of-logic/article1129894/?page=1

[23] von Neumann, J. *First Draft of a Report on the EDVAC*. 1945.

[24] Chen, P. *The Entity-Relationship Model--Toward a Unified View of Data*. In Communications of the ACM, 1(1).1976.

[25] Newell, A. and Simon, H. *Computer Science as Empirical Inquiry: Symbols and Search*. In Communications of the ACM, 19 (3).1976.

[26] Backus, J. *Can* *Programming Be Liberated From the von Neumann Style?* 1977 Turing Award Lecture.

[27] Sowa, J. *Conceptual graphs for a database interface.* IBM Journal of Research and Development, vol. 20, no. 4, pp. 336-357. 1976

[28] Nilsson, N. *The Physical Symbol System Hypothesis: Status and Prospects*. In M. Lungarella, et al., (eds.), 50 Years of AI, Festschrift, LNAI 4850, pp. 9-17, Springer, 2007.

[29] Milner, R. *Elements of interaction*, ACM, 36(1), January 1993.

[30] *Stanford Encyclopedia of Philosophy*. http://plato.stanford.edu/

[31] Tegmark, M. *The Mathematical Universe*. Foundations of Physics 38 (2): 101–150. 2008.

[32] Newell, A.*Unified Theories of Cognition*, Harvard University Press. 1994.

[33] Landauer, R. *The physical nature of information*. Physics Letters A 217, 1996.

[34] Johnson-Laird, P. *Mental models: Towards a cognitive science of language, inference, and consciousness*. 1983.

[35] Anderson, P. W. *More is different*. Science, August 1972.

[36] Dodig-Crnkovic, G. *Significance of Models of Computation, from Turing Model to Natural Computation*. Minds and Machines, May 2011.

[37] Chidamber, S. and Kemerer, C. *A metrics suite for object-oriented design*. IEEE Trans. on Software Engineering, June 1994.

[38] Basili, V. et al. *A Validation of Object-Oriented Design Metrics as Quality Indicators.* IEEE Trans. on Software Engineering, October 1996.

[39] Rosenberg, L. et al. *Risk-based object oriented testing*. Twenty Fourth Annual Software Engineering Workshop, NASA, 1999

[40] Tegarden, D. et al. *A software complexity model of object-oriented systems.* Decision Support Systems: The International Journal, January 1993.

[41] Lorenz, M. et al. *Object-Oriented software metrics*, Prentice-Hall*.* 1994.

[42] Lie, W. et al. *Object-oriented metrics that predict maintainability*. Journal of Systems and Software, February 1993.

[43] Wooldridge, M. *Multiagent Systems: Introduction (2 ^{nd} Edition)*. John Wiley & Sons. 2009.

[44] Krueger, C. W. *Software reuse.* ACM Computing Surveys, 24(2), June 1992.

[45] Savage, J. E. *Models of Computation*. Addison-Wesley*, *1998.

[46] Nenad Medvidovic and Richard N. Taylor. *Exploiting Architectural Style to Develop a Family of Applications*. October 1997*.*

[47] Application Architecture Guide (Patterns & Practices), Microsoft. September 2009.

[48] Culler, D. et al. *LogP: Towards a Realistic Model of Parallel Computation. *ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, May 1993.

[49] Skillicorn, D. and Talia, D. *Models and Languages for Parallel Computation. *ACM Computing Surveys, June 1998.

[50] Liu, X. et al. *Optogenetic stimulation of a hippocampal engram activates fear memory recall*. Nature, April 2012.

[51] Nirenberg, S., Pandarinath, C. *Retinal prosthetic strategy with the capacity to restore normal vision*, Proceedings of the National Academy of Sciences (PNAS), 2012.

[52] *Internet Encyclopedia of Philosophy*. http://www.iep.utm.edu

[53] Fodor, J. *The Language of Thought*. Harvard University Press, 1975.

*Published: 9 June 2017*

**Abstract:**

In animal learning theory, the notion of habits is frequently employed to describe instrumental behaviour that is (among other things) inflexible (i.e. slow to change), unconscious, and insensitive to reinforcer devaluation (Dickinson 1985, Seger & Spiering 2011). It has also been suggested that learning using reinforcement learning algorithms somewhat reflects a transition from affect-based to more habit-based behaviour (Seger & Spiering 2011), where dual memory systems exist for affective working memory and standard (e.g. spatial) working memory (Davidson & Irwin 1999, Watanabe et al. 2007).

Associative Two-Process theory has been proposed to explain phenomena emergent from differential outcomes training. In this procedure, animals (sometimes humans) are presented with stimuli/objects that uniquely identify differential outcomes, e.g. a circle stimulus precedes the presentation of a food outcome, and a square stimulus precedes the presentation of a toy outcome. Outcomes are, in turn, mediated by specific responses, e.g. pressing the right button to obtain the food, pressing the left button to obtain the toy. Manipulating these stimulus-response-outcome contingencies reveals the two types of memory: one concerns ‘standard’ working memory of stimulus-response associations; the other concerns ‘prospective’ memory, in which stimulus, expectation and response follow in a sequence.

The neural dynamic relationship between the purported dual memory structures may vary depending on the stage of learning at which the animal/human (agent) has arrived. Previously it has been suggested (Lowe et al. 2014), and neural-computationally demonstrated, that a working memory route is critical in initial learning trials, where the agent is presented sequentially with a given stimulus, action/behavioural options, and finally an outcome (e.g. a rewarding stimulus or the absence thereof). Subsequent trials lead to a dominance of affective (or otherwise prospective) memory that effectively scaffolds the learning of the outcome-achieving stimulus-response rules under conditions of relative uncertainty. Finally, during later stages of learning, more ‘habitual’ responding may occur, where the retrospective route becomes dominant and ‘overshadows’ the prospective memory.

In neural anatomical terms, candidate structures for implementing prospective memory include the orbitofrontal cortex (OFC), which is considered to enable fast, flexible and context-based learning (particularly important in studies of reversal learning, e.g. Delamater 2007). This is in contrast to the amygdala, which is considered less flexible, i.e. resistant to unlearning, but nevertheless critical to learning valuations of stimuli (Schoenbaum et al. 2007). Furthermore, the interplay between the basolateral division of the amygdala (BLA) and the OFC may be crucial in differential reward evaluation (Ramirez and Savage, 2007). Passingham and Wise (2012) have suggested that the medial prefrontal cortex (PFC) has a critical role in encoding outcome-contingent choice, whereas Watanabe et al. (2007) have provided evidence for the lateral PFC integrating activation inputs from ‘retrospective’ (working memory) areas such as dorsal PFC and ‘prospective’ (outcome expectant) areas such as OFC and medial PFC.

A perspective of Urcuioli (2005, 2013) is that outcome expectancies (from prospective memory) provide a means to effectively classify stimuli. Action selection can then be simplified through exploiting affordances of the subset of those actions already associated with the outcome expectancy classes. This is a reason why participants under certain forms of differential outcomes training can immediately select the unique action that leads to the desired outcome even though the stimulus-action (response) contingency has previously not been experienced: Subjects have already classified the stimuli according to a given outcome expectancy previously associated with an action.
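The transfer effect described above can be sketched computationally. The following is a toy illustration of my own construction (not the published model of Lowe et al. 2014, and the stimulus/response names are hypothetical): delta-rule learning of a retrospective stimulus-response (S-R) route alongside a prospective stimulus-expectancy (S-E) plus expectancy-response (E-R) route, in which a novel stimulus paired only with a familiar outcome expectancy can recruit the correct response through the E-R links.

```python
# Toy sketch (my own construction): two associative routes trained by a
# delta rule. The retrospective route maps stimulus -> response directly;
# the prospective route chains stimulus -> outcome expectancy -> response.
LR, EPOCHS = 0.5, 30

def delta(weights, key, target=1.0):
    """Nudge one associative strength toward its target (delta rule)."""
    weights[key] = weights.get(key, 0.0) + LR * (target - weights.get(key, 0.0))

sr, se, er = {}, {}, {}  # S-R, S-E and E-R association maps
training = [("circle", "press_right", "food"),
            ("square", "press_left", "toy")]
for _ in range(EPOCHS):
    for stim, resp, outcome in training:
        delta(sr, (stim, resp))      # retrospective route
        delta(se, (stim, outcome))   # prospective route, stage 1
        delta(er, (outcome, resp))   # prospective route, stage 2

# Pavlovian-only phase: "triangle" comes to signal food; no response trained.
for _ in range(EPOCHS):
    delta(se, ("triangle", "food"))

def prospective_choice(stim, responses=("press_right", "press_left")):
    """Pick the response best supported by the chained S-E and E-R strengths."""
    def score(r):
        return max(se.get((stim, o), 0.0) * er.get((o, r), 0.0)
                   for o in ("food", "toy"))
    return max(responses, key=score)
```

Here `prospective_choice("triangle")` selects `press_right` even though no triangle-response association was ever trained: the stimulus has been classified by its outcome expectancy, which already carries a response affordance, mirroring the immediate transfer Urcuioli describes.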

In this work, I discuss the associative two-process model in relation to (standard) working memory and ‘affective working memory’ (Watanabe et al. 2007) as providing a means to classify stimuli. I refer to a number of animal learning paradigms that demonstrate the potential for reward and reward-omission anticipation to be associated with reward-promoting behaviour (cf. Overmier & Lawry 1979, Kruse & Overmier 1982, Urcuioli 2013, Lowe et al. 2016, Lowe & Billing 2017), and to neural computational aspects of the interplay of affective (prospective) and working (retrospective) memory that may yield more habitual behaviour. I show that, within an associative two-process context, habits can also be understood in terms of affective working memory, specifically in relation to reward acquisition expectation and reward omission expectation. Habits, in this context, are considered behaviours that are inflexibly selected in spite of reinforcer devaluation, and their rigidity reflects the certainty/uncertainty of a particular rewarding outcome.

I discuss the implications of such learning of habits and affective mediation of behaviour, particularly regarding memory and clinical conditions (e.g. Alzheimer’s disease) and learning in children. This may inform new digitized solutions for intervention approaches with senior citizens and for pedagogy in relation to child development.

**References**


Dickinson, A. (1985). Actions and habits: the development of behavioural autonomy. *Philosophical Transactions of the Royal Society of London B: Biological Sciences*, 308(1135), 67-78.

Davidson, R.J and Irwin, W. (1999). The functional neuroanatomy of emotion and affective style. *Trends in Cognitive Neuroscience*, 3: 11-21.

Delamater, A.R. (2007). The role of the orbitofrontal cortex in sensory-specific encoding of associations in pavlovian and instrumental conditioning. *Annals of the New York Academy of Sciences*, 1121(1):152–173

Kruse, J. M., and Overmier, J. B. (1982). Anticipation of reward omission as a cue for choice behavior. *Learning and Motivation*, 13, 505–525.

Lowe, R., Sandamirskaya, Y. and Billing, E. (2014). The actor-differential outcomes critic: A neural dynamic model of prospective overshadowing of retrospective action control. *The Fourth Joint IEEE Conference on Development and Learning and on Epigenetic Robotics*, pp. 440–447.

Lowe, R., Almer, A., Lindblad, G., Gander, P., Michael, J., Vesper, C. (2016) Minimalist social-affective value for use in joint action: A neural-computational hypothesis. *Frontiers in Computational Neuroscience*, 10(88).

Lowe, R. and Billing, E. (2017). Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training. *Adaptive Behavior*, 25(1), 5-23.

Overmier, J. B., & Lawry, J.A. (1979). Pavlovian conditioning and the mediation of behavior. *The Psychology of Learning and Motivation*, 13, 1–55.

Passingham, R. and Wise, S. (2012). *The neurobiology of the prefrontal cortex: anatomy, evolution, and the origin of insight*, vol 50. Oxford University Press.

Ramirez, D. and Savage, L. (2007). Differential involvement of the basolateral amygdala, orbitofrontal cortex, and nucleus accumbens core in the acquisition and use of reward expectancies. *Behavioral neuroscience*, 121(5):896–906.

Schoenbaum, G., Saddoris, M. and Stalnaker, T. (2007) Reconciling the roles of orbitofrontal cortex in reversal learning and the encoding of outcome expectancies. *Annals of the New York Academy of Science*, 1121:320–335.

Seger, C. A. and Spiering, B. J. (2011). A critical review of habit learning and the basal ganglia. *Frontiers in Systems Neuroscience*, 5.

Urcuioli, P.J. (2005). Behavioral and associative effects of differential outcomes in discrimination learning. *Learning and Behavior*, 33(1):1–21.

Urcuioli, P. (2013). Stimulus control and stimulus class formation. In Madden, G. J., Dube, W. V., Hackenberg, T. D., Hanley, G. P., & Lattal, K. A. (eds), *APA Handbook of Behavior Analysis* (Vol. 1, pp. 361–386). Washington, DC: American Psychological Association.

Watanabe, M., Hikosaka, K., Sakagami, M., & Shirakawa, S. (2007). Reward expectancy-related prefrontal neuronal activities: Are they neural substrates of ‘‘affective’’ working memory? *Cortex*, 43, 53–64.

*Published: 8 June 2017*

**Abstract:**

In his article “What is Information?”, Robert Logan explores issues related to information on the basis of the connotation of information itself, and puts forward two important notions: the “extended mind” and the “symbolosphere”. Building on strong-emergence theory, Logan describes both material and non-material emergence and proposes a new view, a weak form of dualism. On this view, unlike in the biosphere, the mechanisms of evolution and reproduction in the symbolosphere do not follow the rules of genetic inheritance but rather those of memes, and thus belong to the territory of information study. The new dualism has difficulty correctly explaining the ontological position of the symbolosphere, whereas the philosophy of information provides a standard solution in its theory of human evolution. Human evolution involves not only a physiological inheritance pattern, that is, the single evolutionary path of DNA genetic characteristics, but also psychological activity patterns and behavioral patterns, in a three-dimensional way. For the human race, physiological and genetic characteristics present themselves in postnatal growth; at the same time, the characteristics of psychological and behavioral patterns accumulated over the years also leave “traces” on the inherited genetic vector, constituting new congenital genetic features. It is in this interaction, a two-way activity of mutual development and realization between humans and nature as well as cultural factors, that all the content of the physiosphere, biosphere and symbolosphere, and their forms, achieve a complete, essential and unified integration.

*MOL2NET 2017, International Conference on Multidisciplinary Sciences, 3rd edition (15 February–30 November 2017)*

*Published: 28 May 2017*

**Abstract:**

The mRNA molecules expressed in cow’s milk are important molecular biomarkers for different physiological and pathological conditions in cattle. Predicting the quantity in which a specific mRNA type is expressed in cow’s milk is a challenging theoretical task. The current study presents for the first time several different Machine Learning models to predict mRNA expression using mRNA secondary structure fragments. This methodology is based on a dataset of experimental mRNA expression data. Each mRNA molecule has a specific secondary structure, represented as a string from which all possible secondary structure fragments can be read. This information is used as input for the Machine Learning methods of the Weka software in order to obtain classification models that can predict low, medium and high expression of new mRNA types in cow’s milk. The mRNA expression levels were measured with High Throughput Screening techniques. The initial features comprised the counts of the mRNA secondary structure fragments for each expressed mRNA. The model features were then transformed into frequencies, and the expression levels were converted into low and high classes. In order to reduce the high number of possible features, a feature selection method was applied. The best classification model was obtained with the BayesNet method and is based on 24 features and 4067 cases. The model achieves a true positive rate of 0.78 for the low mRNA expression class (average true positive rate of 0.66). Further studies are needed to improve the current results, using datasets with different feature sets and more advanced Machine Learning methods.
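The fragment-counting and frequency-transformation steps can be sketched as follows. This is an illustrative reconstruction, not the authors' actual Weka pipeline: the dot-bracket strings, the window length `k` and the function names are my assumptions, and the downstream classification (e.g. with Weka's BayesNet) is only indicated.

```python
# Illustrative sketch: each secondary structure (dot-bracket string) is read
# as overlapping fragments of length k; fragment counts are normalized to
# frequencies, which then serve as classifier features after selection.
from collections import Counter

def fragment_frequencies(structure, k=3):
    """Sliding-window fragment frequencies over a dot-bracket string."""
    counts = Counter(structure[i:i + k] for i in range(len(structure) - k + 1))
    total = sum(counts.values())
    return {frag: n / total for frag, n in counts.items()}

def vectorize(freq_maps):
    """Align per-molecule frequency maps onto one shared feature vocabulary."""
    vocab = sorted({frag for m in freq_maps for frag in m})
    return vocab, [[m.get(frag, 0.0) for frag in vocab] for m in freq_maps]

# Toy dot-bracket structures standing in for real mRNA secondary structures.
structures = ["(((...)))", "((......))", ".((...))."]
vocab, X = vectorize([fragment_frequencies(s) for s in structures])
```

The resulting matrix `X` (one row per mRNA, one column per fragment) is the kind of frequency table that a feature selection method and a Bayesian network classifier would consume.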

*Published: 16 November 2016*

**Abstract:**

Reliable, high-quality water services are a substantial component of a state’s or country’s energy consumption profile. Although the water–energy nexus has received much attention in the past few years, relatively little work has addressed water systems’ energy use, their potential for energy savings, or the empirical results of their energy management. This paper surveys the literature on theoretical energy savings in water systems and compares the estimates with the outcomes of numerous case studies in which water systems undertook energy efficiency projects and/or programs. The results in practice confirm that the theoretical estimates are indeed achievable; annual energy savings of 10 to 30 percent are typical among water utilities that pursue energy management. These savings come from capital projects, operational changes, and intra- and inter-agency coordination to deliver water by the most energy-efficient path. Such solutions often also improve hydraulic performance and water quality, showing that energy management is cost-effective, prompt, and synergistic: a critical step in advancing sustainable water supply.

*3rd International Electronic and Flipped Conference on Entropy and Its Applications (1–10 November 2016)*

*Published: 1 November 2016*

**Abstract:**

Both the maximum entropy (MaxEnt) and Bayesian methods update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of constraints or data, respectively. To find the posterior, the MaxEnt method maximizes an entropy function subject to constraints, using the method of Lagrange multipliers, whereas the Bayesian method finds its posterior by multiplying the prior with likelihood functions, in which the measured data are substituted into the appropriate terms. The purpose of this work is to develop a Bayesian method to analyze flow networks and compare it to the MaxEnt method. Flow networks include, among others, water and electrical distribution networks and transport networks. The purpose of using probabilistic methods to model these networks is to predict the flow rates (and other variables) when there is not enough information to model them deterministically, and also to incorporate the effects of uncertainty. After developing the Bayesian method, we show that the Bayesian and MaxEnt methods obtain the same posterior means but, when the prior is a normal distribution, their covariances are different. The Bayesian method incorporates interactions between variables through the likelihood function, introducing second-order or higher cross-terms within the posterior pdf. The MaxEnt method, however, incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. Therefore, the mean-value inferences made by the MaxEnt and Bayesian methods are similar, but the MaxEnt method has a numerical advantage in its integrations, as the correlation terms can be avoided.
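The mean/covariance contrast can be made concrete with a minimal worked example. This is my own construction with hypothetical numbers, not the paper's network: two unknown pipe flows with independent standard-normal priors and one known total flow.

```python
# Toy two-pipe network: flows x1, x2 with prior mean 0, covariance I,
# and the linear datum x1 + x2 = b (A = [1, 1], Sigma0 = identity).
b = 10.0
s = 1.0 + 1.0  # A Sigma0 A^T = 2

# Bayesian conditioning on the exact linear observation:
#   mean = Sigma0 A^T (A Sigma0 A^T)^-1 b
#   cov  = Sigma0 - Sigma0 A^T (A Sigma0 A^T)^-1 A Sigma0
bayes_mean = [b / s, b / s]
bayes_cov = [[1.0 - 1.0 / s, -1.0 / s],
             [-1.0 / s, 1.0 - 1.0 / s]]

# MaxEnt (minimum relative entropy) with only the mean constraint
# E[x1 + x2] = b shifts the mean identically but keeps the Gaussian
# prior's covariance, avoiding the cross-correlation terms.
maxent_mean = [b / s, b / s]
maxent_cov = [[1.0, 0.0],
              [0.0, 1.0]]
```

Both means come out as [5.0, 5.0], while the off-diagonal covariance is -0.5 in the Bayesian posterior (the two flows become anticorrelated) and 0 in the MaxEnt one, which is the pattern the comparison above describes.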

*Published: 1 November 2016*

**Abstract:**

Parkinson’s disease (PD) is a neurodegenerative disorder characterized by fibrillar cytoplasmic aggregates of α-synuclein (i.e., Lewy bodies [LB]) and the associated loss of dopaminergic cells in the *substantia nigra*. However, mutations in genes such as α-synuclein (SNCA) account for only 10% of PD occurrences. Exposure to environmental toxicants, including pesticides (e.g. paraquat [PQ]) and manganese (Mn), is also recognized as an important PD risk factor. Thus, aging, genetic alterations and environmental factors all contribute to the etiology of PD. In fact, both genetic and environmental factors are thought to interact in the promotion of idiopathic PD, but the mechanisms involved are still unclear. In this study, we report a toxic synergistic effect between α-synuclein and either paraquat or Mn treatment. We identified an essential role for central carbon (glucose) metabolism in dopaminergic cell death induced by paraquat or Mn treatment that is enhanced by the overexpression of α-synuclein. PQ “hijacks” the pentose phosphate pathway (PPP) to increase NADPH reducing equivalents and stimulate paraquat redox cycling, oxidative stress, and cell death. PQ also stimulated an increase in glucose uptake, the translocation of glucose transporters to the plasma membrane, and AMPK activation. The overexpression of α-synuclein further stimulated an increase in glucose uptake and AMPK activity, but impaired glucose metabolism. In effect, α-synuclein activity directs additional carbon to the PPP to supply paraquat redox cycling. Alternatively, Mn induces an upregulation of glycolysis and the malate-aspartate shuttle to compensate for energy depletion due to Mn toxicity. Mn treatment causes a decrease in carbon flow through the TCA cycle and a disruption in pyruvate metabolism, which are consistent with dysfunctional mitochondria and inhibition of pyruvate dehydrogenase. The overexpression of α-synuclein was shown to potentiate Mn toxicity by impairing glycolysis through inhibition of aldolase activity. In effect, α-synuclein overexpression negates the metabolic response that alleviates Mn toxicity, resulting in an increase in cell death.

*Published: 1 November 2016*

**Abstract:**

Recently, polymer vesicles called polymersomes have emerged as promising nanocarriers. Several studies have reported the formation of poly(ethylene oxide)-*block*-poly(ε-caprolactone) (PEO-*b*-PCL) based vesicles due to their high potential for biomedical applications. However, to our knowledge, the incorporation of ultrasmall superparamagnetic iron oxide nanoparticles (USPIO) into these PEO-*b*-PCL vesicles has not yet been described.

This work reports the self-assembly of PEO_{2000}-*b*-PCL_{12650} copolymers with USPIO. PEO_{2000}-*b*-PCL_{12650} was chosen as the amphiphilic copolymer because the PEO block is biocompatible and prolongs the circulation time of nanoparticles *in vivo*, whereas the PCL block is biodegradable. Moreover, PEO_{2000}-*b*-PCL_{12000} copolymers have been reported to form vesicles.

USPIO were synthesized by the thermal decomposition method (magnetic core sizes of 4.2 nm and 7.5 nm), and the self-assembly of PEO_{2000}-*b*-PCL_{12650} with USPIO was performed by nanoprecipitation. Polymeric nanoparticles with diameters close to 100 nm and a high USPIO content were formed, as shown by dynamic light scattering (DLS), transmission electron microscopy (TEM) and cryo-TEM. These nanoassemblies are characterized by very high r_{2}/r_{1} ratios (at 20 and 60 MHz), which make them highly promising candidates as T_{2}-contrast agents for magnetic resonance imaging (MRI). The size of the USPIO entrapped in the PEO-*b*-PCL nanoassemblies has a strong impact on their magnetic properties: it affects both their longitudinal and transverse relaxivities and thus their MRI sensitivity.

The next steps in further studies will be the incorporation of an anti-cancer drug into these nanocarriers and the attachment of an active targeting group such as an RGD-containing peptide to their surfaces.