1. From Bodies to Bodies
One fundamental aspect of Human-Robot Interaction is the role played by the morphologies of both humans and machines. Humans are naturally oriented towards social interaction with other humans, as Aristotle wrote in his classic Politics: “Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god”. Considering this orientation as the long result of an evolutionary process, we can identify the several cognitive mechanisms that make these processes possible (Adolphs, 2003; Bechtel, 2001; Frith & Frith, 2007; Lieberman, 2012). Some of them, like our constant face-seeking patterns, produce biases such as pareidolia: because faces convey primal information for our social life, we come to see faces in toast, rocks or forests (Kato & Mugitani, 2015; Liu et al., 2014).
The constant analysis of morphological aspects is related to mating (Jaffé & Moritz, 2010; Wade, 2010), fight-or-flight responses (Bubic, von Cramon, & Schubotz, 2010), social coordination (Lieberman, 2000) and emotional interaction (Casacuberta & Vallverdú, 2015). This primarily affects visual processes (Cavanagh, 2011) and the metacognitive processes related to them (Kirsh, 2005), but it must be understood as a multidimensional process involving several senses. Finally, cultural values also influence basic sensory information processing, as cultural psychologists have shown (Nisbett, 2003).
Taking into account the fact that human morphologies play a social role, and that affection and emotion are fundamental aspects of eco-cognitive and social processes, I want to highlight some important aspects that must be taken into account when designing good HRI systems and environments.
2. Moral Morphologies as Social Prejudices or Cognitive Bias?
Although 19th-century psychomorphologists or physiognomists like Cesare Lombroso were wrong about a causal relationship between face shape and moral behaviour, the truth is that human beings tend to correlate certain morphologies with moral and/or emotional content (Mazzarello, 2011; Stepanova & Strube, 2009). Bad guys are usually dark and angry, with some deformity or extreme trait (big nose, big ears, small head, ...) and strange kinematic body movement, as we can find in most popular cinema and in Walt Disney’s villain characters (Gould, 2008). Obviously, not only biologically determined aspects are involved in this process; the role of cultural values must not be undervalued.
Beyond the debates between continuous and categorical models of human perception of emotions, the outstanding fact is that morphology affects how we define the emotional output, or even the character, of an agent (Martinez & Du, 2012). Therefore, the morphology of the robot is one among a long list of emotional affordances I have described in previous research (Vallverdú & Trovato, 2016), but at the same time morphology plays an outstanding role because it determines a long set of related characteristics of the agent.
3. Emotional Morphologies for HRI
According to the previous data, it is obvious that besides the functional design of a robot, several socio-cognitive aspects related to its morphology must be taken into account: gender (Slepian, Weisbuch, Adams, & Ambady, 2011), related language semantics (Gendron, Lindquist, Barsalou, & Barrett, 2012), social context (Hertwig & Herzog, 2009; McHugh, McDonnell, O’Sullivan, & Newell, 2010), and body gestures/kinematics (Castellano, Villalba, & Camurri, 2007), among a long list. It is notable, for example, that most previous studies have dealt with visual and linguistic HRI, while other extremely important channels, like touch or smell, have been almost neglected, basically due to the high complexity of these processes. These aspects are basic not only for a deeper relationship between humans and robots in classic domains (service, military, industrial, care), but also for new ones, like the taboo domain of sexual robotics (Levy, 2007), surely one of the niches with the greatest expected revenues and implementation according to current data on sex-related browsing and interests on the Web and social networks. As a conclusion of this section, I must affirm that the study of the emotional and affective aspects embedded in robot morphologies arises as a multidisciplinary research field as well as a multidimensional process that goes beyond the basic description of size, shape, colour or texture, requiring more variables: temperature, kinematic speed, temporal flow and adjustment to naturalistic emotional gesture dynamics, among others.
4. The Challenge of Dynamically Augmented Morphologies: Transhumanism or Adaptable Robotics
There is a final idea to be discussed here: human agents are starting to severely modify their cognitive and bodily limits (to date just as a repairing/prosthetic process or as fashionable gadgets), and this process will severely modify how the natural analysis of morphological phenomenology is performed. At the same time, we can find robots on the market with variable morphologies (combining biped walking with four-legged locomotion or even wheels, or with adjustable body characteristics), something that can confuse the human interacting with the robot. While we still lack a clear understanding of the morphological aspects currently involved in HRI, a new set of challenges lies ahead of us.
REFERENCES
Adolphs, R. (2003). Cognitive neuroscience of human social behaviour. Nature Reviews Neuroscience, 4(3), 165–178. http://doi.org/10.1038/nrn1056
Bechtel, W. (2001). Philosophy and the Neurosciences: A Reader. Blackwell Publishing.
Bubic, A., von Cramon, D. Y., & Schubotz, R. I. (2010). Prediction, cognition and the brain. Frontiers in Human Neuroscience, 4(March), 25. http://doi.org/10.3389/fnhum.2010.00025
Casacuberta, D., & Vallverdú, J. (2015). Emotions and Social Evolution: A Computational Approach. Handbook of Research on Synthesizing Human Emotion in Intelligent Systems and Robotics. http://doi.org/10.4018/978-1-4666-7278-9.ch004
Castellano, G., Villalba, S. D., & Camurri, A. (2007). Recognising Human Emotions from Body Movement and Gesture Dynamics. In Affective Computing and Intelligent Interaction (pp. 71–82). http://doi.org/10.1007/978-3-540-74889-2_7
Cavanagh, P. (2011). Visual cognition. Vision Research. http://doi.org/10.1016/j.visres.2011.01.015
Frith, C. D., & Frith, U. (2007). Social Cognition in Humans. Current Biology. http://doi.org/10.1016/j.cub.2007.05.068
Gendron, M., Lindquist, K. A., Barsalou, L., & Barrett, L. F. (2012). Emotion words shape emotion percepts. Emotion, 12(2), 314–325. http://doi.org/10.1037/a0026007
Gould, S. J. (2008). A Biological Homage to Mickey Mouse. Ecotone, 4(1–2), 333–340. http://doi.org/10.1353/ect.2008.0045
Hertwig, R., & Herzog, S. M. (2009). Fast and Frugal Heuristics: Tools of Social Rationality. Social Cognition, 27(5), 661–698. http://doi.org/10.1521/soco.2009.27.5.661
Jaffé, R., & Moritz, R. F. A. (2010). Mating flights select for symmetry in honeybee drones (Apis mellifera). Naturwissenschaften, 97(3), 337–343. http://doi.org/10.1007/s00114-009-0638-2
Kato, M., & Mugitani, R. (2015). Pareidolia in infants. PLoS ONE, 10(2). http://doi.org/10.1371/journal.pone.0118539
Kirsh, D. (2005). Metacognition, Distributed Cognition and Visual Design. Cognition, Education and Communication Technology, 147–180. http://doi.org/10.4324/9781410612892
Levy, D. (2007). Robot Prostitutes as Alternatives to Human Sex Workers. Proceedings of the IEEE-RAS International Conference on Robotics and Automation, 1–6.
Lieberman, M. D. (2000). Intuition: A Social Cognitive Neuroscience Approach. Psychological Bulletin, 126(1), 109–137. http://doi.org/10.1037//0033-2909.126.1.109
Lieberman, M. D. (2012). A geographical history of social cognitive neuroscience. NeuroImage. http://doi.org/10.1016/j.neuroimage.2011.12.089
Liu, J., Li, J., Feng, L., Li, L., Tian, J., & Lee, K. (2014). Seeing Jesus in toast: Neural and behavioral correlates of face pareidolia. Cortex, 53(1), 60–77. http://doi.org/10.1016/j.cortex.2014.01.013
Martinez, A., & Du, S. (2012). A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives. Journal of Machine Learning Research, 13, 1589–1608.
Mazzarello, P. (2011). Cesare Lombroso: An anthropologist between evolution and degeneration. Functional Neurology, 26(2), 97–101.
McHugh, J. E., McDonnell, R., O’Sullivan, C., & Newell, F. N. (2010). Perceiving emotion in crowds: The role of dynamic body postures on the perception of emotion in crowded scenes. In Experimental Brain Research (Vol. 204, pp. 361–372). http://doi.org/10.1007/s00221-009-2037-5
Nisbett, R. E. (2003). The Geography of Thought: How Asians and Westerners Think Differently...and Why. New York: Free Press.
Slepian, M. L., Weisbuch, M., Adams, R. B., & Ambady, N. (2011). Gender moderates the relationship between emotion and perceived gaze. Emotion, 11(6), 1439–1444. http://doi.org/10.1037/a0026163
Stepanova, E. V., & Strube, M. J. (2009). Making of a Face: Role of Facial Physiognomy, Skin Tone, and Color Presentation Mode in Evaluations of Racial Typicality. The Journal of Social Psychology, 149(1), 66–81. http://doi.org/10.3200/SOCP.149.1.66-81
Vallverdu, J., & Trovato, G. (2016). Emotional affordances for human-robot interaction. Adaptive Behavior, 1059712316668238. http://doi.org/10.1177/1059712316668238
Wade, T. J. (2010). The Relationships between Symmetry and Attractiveness and Mating Relevant Decisions and Behavior: A review. Symmetry. http://doi.org/10.3390/sym2021081
Dr. C.N.J. de Vey Mestdagh, University of Groningen, the Netherlands, [email protected]
Extended abstract, submission date 15-4-2017
The complexity of the universe can only be defined in terms of the complexity of the perceptual apparatus. The simpler the perceptual apparatus the simpler the universe. The most complex perceptual apparatus must conclude that it is alone in its universe.

Abstract
The concept of complexity has been neglected in the legal domain, both as a qualitative concept that could be used to legally and politically analyse and criticize legal proceedings, and as a quantitative concept that could be used to compare, rank, plan and optimize these proceedings. In science the opposite is true: especially in the field of Algorithmic Information Theory (AIT), the concept of complexity has been scrutinized.
In this paper we first have a quick look at AIT to see what it could mean in this phase of our research in the legal domain. We conclude that there is a difference between problem complexity and solution complexity. In this paper we therefore start to develop a model of complexity by describing problem complexity in the legal domain. We use a formal model of legal knowledge to derive and describe the parameters for describing the problem complexity of cases represented in this formal model. Further research will focus on refining and extending the formalization of the model of complexity, on comparing problem and solution complexity for several legal cases using available algorithms, and on validating the combined model against concrete cases and lawyers’ and legal organizations’ opinions about their complexity.
1. Complexity in the legal domain
The concept of complexity is hardly developed in the legal domain. Most of the descriptions of concepts related to complexity in legal literature refer to vagueness (intension of concepts), open texture (extension of concepts), sophistication (number of elements and relations) and multiplicity of norms (concurring opinions) - in most cases even without explicit reference to the concept of complexity. Complexity arises in all these cases from the existence and competition of alternative perspectives on legal concepts and legal norms.[1] A complex concept or norm from a scientific point of view is not necessarily a complex concept or norm from a legal point of view. If all parties involved agree, i.e. have or choose the same perspective/opinion, there is no legal complexity, i.e. there is no case/the case is solved. In science more exact definitions of complexity are common and applied. Complexity is associated with, inter alia, uncertainty, improbability and quantified information content. Despite this discrepancy between the legal domain and the domain of science, in the legal domain complexity is as important as in other knowledge domains. Apart from the obvious human interest of acquiring and propagating knowledge per se, complexity has legal, economic, political and psychological importance. Legal, because a coherent concept of complexity helps to analyse and criticize legal proceedings, in order to clarify them, to enable a justified choice of the level of expertise needed to solve legal cases, and to reduce unnecessary complexity (an example of reducing complexity by compression is given in the next paragraph); economic, because complexity increases costs and measuring complexity is a precondition for reducing these costs (it can help in designing effective norms, implementing them effectively, calculating and reducing the costs of legal procedures (cf. White, M.J., 1992), planning the settlement of disputes and other legal proceedings, etc.); political, because legal complexity can be an instrument to exert power and can increase inequality; psychological, because complexity increases uncertainty. A validated model of complexity in the legal domain can help to promote these interests. (Cf. Schuck, P.H., 1992; Ruhl, J. B., 1996; Kades, E., 1997).
How to develop a model of complexity in the legal domain (methodology)
In this paper we will try to bridge the gap between the intuitive definitions of complexity in the legal domain and the more exact way of defining complexity in science. We will do that on the basis of a formal model of legal knowledge (the Logic of Reasonable Inferences and its extensions) that we introduced before, that was implemented as the algorithm of the computer program Argumentator, and that was empirically validated against a multitude of real-life legal cases. The ‘complexities’ of these legal cases proved to be adequately represented in the formal model. In earlier research we tested the formal model against 430 cases, of which 45 were deemed more complex and 385 less complex by lawyers. A first result was that the algorithm (Argumentator), when provided with case facts and legal knowledge, was able to solve 42 of the more complex cases and 383 of the 385 less complex cases in exactly the same way as the legal experts did (including the systematic mistakes made by these experts). A second result was that the algorithm, when instructed to do so, improved the decisions in 30 (66%) of the 45 more complex cases and in 104 (27%) of the 385 less complex cases. This result confirms the relative complexity of the first 45 cases. The selection of these 45 cases thus provides us with the material from which criteria for the definition of complexity in this paper could be derived. These criteria are translated into quantitative statements about the formal representation of the cases. Further research will focus on the fine-tuning of this quantitative model by comparing its results with new empirical data (new cases and opinions of lawyers about the (subjective) complexity of cases). Finally, the ability of the fine-tuned model to predict complexity in new cases will be tested. A positive result can be applied to reduce the aforementioned costs of processing complex legal knowledge.
2. Models of complexity in science
There are many different definitions of complexity in science. The aim of this research is to develop a measure of complexity for formal representations of legal knowledge and their algorithmic implementations. In this abstract we will therefore refer to definitions of complexity from Algorithmic Information Theory (AIT), which studies the complexity of data structures (representations of knowledge in a computer). In AIT the complexity of a data structure is equated with its information content. Complexity is postulated to decrease proportionately to the degree of (algorithmic) compressibility of the data structure. To assess the usefulness of AIT for our practical purpose, i.e. the design of a quantitative model of complexity of legal knowledge, we studied some publications from the domain of AIT. We read that complexity is approached as Algorithmic Probability (cf. Solomonoff’s a priori probability), i.e. the higher the probability that a random computer program outputs an object, the less complex this object is considered to be. We read that complexity is approached as Algorithmic Complexity (cf. Kolmogorov’s descriptive complexity), i.e. the shorter the code needed to describe an object (string), the less complex this object is considered to be. This is an interesting approach since it seems to offer a concrete measure for the complexity of certain objects (e.g. of legal problems) and it associates with the concept of compressibility, which we are able to transpose as simplification (as opposed to sophistication) to the legal domain. Finally, we read about Dual Complexity Measures (cf. Burgin, 2006), which relate AIT to more complex problem structures and distinguish the complexity of the system described (the problem and its solution) from the complexity of the description (the algorithm used to describe the problem and its solution). A common and essential aspect of these approaches is the compressibility of the object as a measure of its complexity.
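Kolmogorov complexity itself is incomputable, but a practical proxy can illustrate the idea of compressibility as a complexity measure. The following sketch is our own illustration (not drawn from the AIT literature): it uses zlib compression as a stand-in for descriptive complexity, so that highly regular text yields a low ratio and noise-like text a high one.

```python
import random
import zlib

def compressibility(text: str) -> float:
    """Compressed-to-raw size ratio: lower means more regular,
    i.e. less complex under a Kolmogorov-style proxy."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# A highly patterned 'provision' compresses well...
patterned = ("waste products are not chemical waste if they are objects "
             "that have reached the user. ") * 20

# ...while noise-like text of the same length barely compresses.
random.seed(0)
noise = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                for _ in range(len(patterned)))

assert compressibility(patterned) < compressibility(noise)
```

As the abstract notes, such a syntactic measure only partially covers legal complexity: a perfectly incompressible text can still be legally trivial, and a highly compressible one can be hotly disputed.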
In all these cases the computer program is considered to be an explanation of a (more or less complex) object (or data structure). Our conclusion is that these approaches will be useful when trying to prove certain characteristics of the model of complexity in the legal domain, once developed, but not primarily for the design of the model. We will have to describe the formal model and the algorithm (explanation) first. Just to get a practical insight into the concept of compressibility, we applied the idea of compressibility to some legal cases (see the example below). However, many of the characteristics of legal cases that make them ‘complex’ according to lawyers are not directly related to compressibility. Moreover, often the simplest ‘palaver’ in the legal domain is meant to be incomprehensible and therefore lacks the (semantic and relational) patterns needed for compressibility. Our conclusion is that this concept only partially covers the problem in the legal domain. We are eager to discuss this with our colleagues in the mathematical domain.
An example of operand compression using logical equivalence in the legal domain
Objects regulation U.1. appendix III Decree Indication Chemical Waste reads:
‘Waste products are not considered as chemical waste [cw] if they are objects [o] that have attained the waste phase of their lifecycle [wp], unless:
- This has happened before they have reached the user [ru];
- This has happened after they have reached the user [ru] and they are
- transformers .. [1] .. 10. mercury thermometers. [10]’
The logical structure of this legal provision is:
not cw is implied by o and wp and not ((not ru) or (ru and (1 or .. or 10)))
Logically equivalent with this formalisation of the provision is the formula:
not cw is implied by o and wp and ru and not (1. or .. or 10)
which is a compression of the original provision.
Interestingly enough the retranslation of this equivalent formula to natural language is:
‘Waste products are not considered as chemical waste if they are objects that have attained the waste phase of their lifecycle and they have reached the user and they are not 1. transformers .. 10. mercury thermometers’.
Although this example illustrates that compression can be beneficial because it improves the readability of the regulation, it does not reduce its actual complexity which - in practice - is related to different opinions about the meaning of concepts like ‘Waste products’.
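The equivalence claimed above can be checked mechanically by an exhaustive truth-table comparison. In the sketch below (our illustration), the disjunction of the listed categories (transformers, ..., mercury thermometers) is collapsed into a single propositional variable `d`; the function names are ours.

```python
from itertools import product

def antecedent_original(o, wp, ru, d):
    # o and wp and not((not ru) or (ru and (1 or .. or 10)))
    return o and wp and not ((not ru) or (ru and d))

def antecedent_compressed(o, wp, ru, d):
    # o and wp and ru and not (1 or .. or 10)
    return o and wp and ru and not d

# Exhaustive check over all 16 truth-value assignments.
for o, wp, ru, d in product([False, True], repeat=4):
    assert antecedent_original(o, wp, ru, d) == antecedent_compressed(o, wp, ru, d)
```

The check confirms that the compression preserves the provision's logical content; what it cannot capture is the disputed meaning of concepts like ‘waste products’.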
3. A formal model of legal knowledge (reasonable inferences)
The first step in developing a model of complexity in the legal domain is to describe the formal characteristics of legal knowledge that are related to the essence of complexity in this domain, i.e. the competition of opinions. In a previous publication (de Vey Mestdagh and Burgin, 2015) we introduced the following model that allows for reasoning about (mutually exclusive) alternative opinions and that allows for tagging the alternatives, e.g., describing their identity and context:
Our knowledge of the world is always perspective bound and therefore fundamentally inconsistent, even if we agree to a common perspective, because this agreement is necessarily local and temporal due to the human epistemic condition. The natural inconsistency of our knowledge of the world is particularly manifest in the legal domain (de Vey Mestdagh et al., 2011).
In the legal domain, on the object level (that of case facts and opinions about legal subject behavior), alternative (often contradicting) legal positions compete. All of these positions are a result of reasoning about the facts of the case at hand and a selection of preferred behavioral norms presented as legal rules. At the meta-level meta-positions are used to make a choice for one of the competing positions (the solution of an internal conflict of norms, a successful subject negotiation or mediation, a legal judgement). Such a decision based on positions that are inherently local and temporal is by definition also local and temporal itself. The criteria for this choice are in most cases based on legal principles. We call these legal principles metaprinciples because they are used to evaluate the relations between different positions at the object level.
To formalize this natural characteristic of (legal) knowledge we developed the Logic of Reasonable Inferences (LRI, de Vey Mestdagh et al., 1991). The LRI is a logical variety that handles inconsistency by preserving inconsistent positions and their antecedents, using as many independent predicate calculi as there are inconsistent positions (Burgin and de Vey Mestdagh, 2011, 2013). The original LRI was implemented and proved to be effective as a model of and a tool for knowledge processing in the legal domain (de Vey Mestdagh, 1998). In order to be able to make inferences about the relations between different positions (e.g. to make local and temporal decisions), labels were added to the LRI. In de Vey Mestdagh et al. 2011, formulas and sets of formulas are named and characterized by labelling them in the form (Ai, Hi, Pi, Ci). These labels are used to define and restrict different possible inference relations (Axioms Ai and Hypotheses Hi, i.e. labeled signed formulas and control labels) and to define and restrict the composition of consistent sets of formulas (Positions Pi and Contexts Ci). Formulas labeled Ai must be part of any position and context and therefore are not (allowed to be) inconsistent. Formulas labeled Hi can only be part of the same position or context if they are mutually consistent. A set of formulas labeled Pi represents a position, i.e. a consistent set of formulas including all Axioms (e.g., a perspective on a world, without inferences about that world). A set of formulas labeled Ci represents a context, i.e. a maximal set of consistent formulas within the (sub)domain and their justifications (cf. the world under consideration). All these labels can be used as predicate variables and, if individualized, to instantiate predicate variables, and consequently as constants (variables as named sets).
Certain metacharacteristics of formulas and pairs of formulas were finally described by labels (e.g., metapredicates like Valid, Exclude and Prefer) describing some of their legal source characteristics and legal relations, which could be used to rank the different positions externally. The semantics of these three predicates (Valid, Exclude and Prefer) are described in de Vey Mestdagh et al. 2011. These three predicates describe the elementary relations between legal positions that are prescribed by the most fundamental sets of legal principles (i.e. principles regarding the legal validity of positions, principles regarding the relative exclusivity of legal positions even when they do not contradict each other, and principles regarding the preference of one legal position over another). It was also demonstrated that the LRI allows for reasoning about (mutually exclusive) alternatives.
In (de Vey Mestdagh and Burgin, 2015) we showed that labels can be used formally to describe the ranking process of positions and contexts. With that the thus extended LRI allows for local and temporal decisions for a certain alternative, which means without discarding the non-preferred alternatives like belief revision does and without using the mean of all alternatives like probabilistic logics do. This extended the LRI from a logical variety that could be used to formalize the non-explosive inference of inconsistent contexts (opinions) and naming (the elements of) these contexts to a labeled logical variety, in which tentative decisions can be formally represented by using a labelling that allows for expressing the semantics of the aforementioned meta-predicates and prioritizing (priority labelling). In (de Vey Mestdagh and Burgin, 2015) we illustrated the use of these labels by examples.
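As a toy illustration of the labelled machinery, the sketch below reduces the LRI (which is predicate-logical) to signed propositional atoms: axioms Ai must appear in every context, hypotheses Hi may conflict, and the contexts Ci are the maximal consistent sets containing all axioms. The atoms and the brute-force enumeration are our own simplification, not the LRI implementation.

```python
from itertools import combinations

# Signed atoms as a propositional stand-in for labelled formulas.
A = {("norm_is_valid", True)}                     # axioms: in every context
H = {("liable", True), ("liable", False),         # mutually exclusive opinions
     ("force_majeure", True)}

def consistent(formulas):
    """A set is consistent if no atom occurs with both signs."""
    return not any((atom, True) in formulas and (atom, False) in formulas
                   for atom, _ in formulas)

def contexts(A, H):
    """Maximal consistent subsets of A | H that contain all axioms."""
    hs = sorted(H)
    candidates = [A | set(c) for r in range(len(hs), -1, -1)
                  for c in combinations(hs, r)]
    cons = [s for s in candidates if consistent(s)]
    return [s for s in cons if not any(s < t for t in cons)]

C = contexts(A, H)
assert len(C) == 2           # one context per side of the 'liable' dispute
assert all(A <= c for c in C)
```

Note that both inconsistent alternatives are preserved as separate contexts, rather than one being discarded (as in belief revision) or the two being averaged (as in probabilistic logics).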
In the next paragraph we will use the extended LRI to identify the quantitative parameters of complexity in the legal domain.
4. A formal model of the complexity of legal knowledge (parameters for a reasonable calculation of complexity)
The processing of legal knowledge takes place in successive phases. Each phase is characterized by its own perspectives and associated parameters of complexity. Roughly, first the different parties in a legal dispute take their positions, then the positions are confronted and a decision is made and finally the decision is presented. The complexity of the dispute differs from phase to phase. Again roughly, from intermediate (the separate positions), to high (the confrontation and decision making), to low (the decision itself). The separate positions are basically consistent and their contents can each be processed within a separate single logical variety. When the dispute starts complexity increases, because the shared axioms of the dispute have to be calculated and the positions are by definition mutually inconsistent and several calculi within the logical variety have to be used to calculate the joint process of the dispute and to decide between different hypotheses within the dispute. Ultimately the decision principles included in the different positions have to be used to rank the different consistent solutions. The dispute ends by presenting the highest ranking consistent (local and temporal) decision, representing a concurring opinion or a compromise. The complexity of this result is reduced again, because it can be (re)calculated within a single consistent variety. Below we will describe these phases in more detail and the related parameters of complexity in terms of the formal model introduced above.
The complexity of a given case can be quantified on the basis of the case elements and relations presented by all parties. The processing takes place in five phases:
At the start of legal knowledge processing the case can be described as:
- A number of sets n (the number of parties involved) of labelled formulas Hi,l representing the initial positions of each of the parties in a legal discourse, i.e. hypotheses i of parties l about the (alleged) facts and applicable norms in a legal case;
The next step is:
- Determining the intersection of these sets Hi,l, which defines Ai, representing the agreed case facts and norms, and determining the union of all complements, which defines Hi; (Ai, Hi) represents the initial case description.
The third step is:
- Calculating all possible minimal consistent positions Pi that can be inferred from (Ai, Hi) by applying a logic, e.g. the LRI, a logical variety that allows each position to be established by its own calculus. If these calculi differ, this adds to the complexity of the problem. In earlier publications we assumed all the calculi to be the same (predicate calculus).
The fourth step is:
- Calculating all maximal consistent contexts (cf. possible consistent worlds) Ci on the basis of (Ai, Hi, Pi).
The last step is:
- Making a ranking of these contexts on the basis of the application of the metanorms (decision criteria) included in them. A formal description and an example of this process can be found in (de Vey Mestdagh and Burgin, 2015).
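The five phases can be sketched end-to-end for a toy propositional case. This is our simplification: the LRI works with labelled predicate calculi and ranks contexts by legal metanorms, whereas the sketch uses signed atoms and a naive stand-in criterion (prefer the largest context).

```python
from itertools import combinations

# Phase 1: each party's hypotheses as signed atoms (toy example).
party_a = {("objects", True), ("waste_phase", True), ("chemical_waste", True)}
party_b = {("objects", True), ("waste_phase", True), ("chemical_waste", False)}

# Phase 2: agreed axioms Ai (intersection) and disputed hypotheses Hi.
A = party_a & party_b
H = (party_a | party_b) - A

def consistent(formulas):
    return not any((atom, True) in formulas and (atom, False) in formulas
                   for atom, _ in formulas)

# Phases 3-4: all maximal consistent contexts containing the axioms.
def contexts(A, H):
    hs = sorted(H)
    candidates = [A | set(c) for r in range(len(hs), -1, -1)
                  for c in combinations(hs, r)]
    cons = [s for s in candidates if consistent(s)]
    return [s for s in cons if not any(s < t for t in cons)]

C = contexts(A, H)
assert len(C) == 2  # 'chemical waste' vs 'not chemical waste'

# Phase 5: rank contexts; a naive stand-in for the legal metanorms.
decision = max(C, key=len)
assert A <= decision
```

The decision is local and temporal in the sense of the model: the non-preferred context remains available for later re-ranking under different metanorms.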
Each step in this process is characterized by its own parameters of complexity. In legal practice different procedures are used to determine and handle (reduce) complexity in these different phases.
In the first phase a direct, static measure of complexity is commonly applied: the number of parties and the number of hypotheses. This is a rough estimate of the number of different positions (interpretations, perspectives, interests).
In the second phase a direct, relative measure of complexity is commonly applied: the number of formulas in Ai and its size relative to Hi. The larger the relative size of Ai, the less complex a case is considered to be, because there is supposed to be more consensus.
In the third and fourth phases all positions Pi and contexts Ci are derived:
Given the resulting set of labelled formula (Ai, Hi, Pi, Ci) representing the legal knowledge presented in a certain case, the problem complexity of this set can be defined as follows:
- The subset Ai (agreed case facts and norms) is by definition included in each Pi and Ci so its inclusion as such is not a measure for complexity as it reflects absolute consent;
- The elements of the subset Hi are by definition not included in each Pi and Ci, so the relative size of the inclusion of its elements is a measure of complexity, as it reflects relative consent. If there is more conformity there is less complexity. It is even possible that certain elements of the subset Hi are not included in any Pi and Ci. The number of these ‘orphaned’ elements can also contribute to the complexity of a case, because they represent antecedents without consequents or consequents without antecedents (a decision proposed without justification). Orphaned elements can be the result of incompletely presented positions or - worse - be smoke screens;
- The relative size of the fraction of subset Ai in (Ai, Hi) - relative to the fraction of Ai in other cases - is a measure of complexity as it reflects the size of shared (consented) knowledge in a legal dispute. This holds even if the size of Ai is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the Ai into consideration.
- The relative size of the fraction of subset Hi in (Ai, Hi) - relative to the Hi in other cases - is a measure of complexity as it reflects the size of disputed knowledge in a legal dispute. This holds even if the size of Hi is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the Hi into consideration.
- The relative size of the subset Pi (relative to the Pi in other cases) is a measure of complexity as it reflects the number of different minimal positions that can be taken logically in this specific case. The size of Pi can only be manipulated indirectly (through the respective sizes of Ai and Hi).
- The relative size of the subset Ci (relative to the Ci in other cases) is a measure of complexity as it reflects the number of different consistent contexts (possible decisions) that can be distinguished in this specific case.
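The counting of these parameters can be sketched in code. The following Python sketch is illustrative only: it assumes formulas are represented as opaque strings and positions/contexts as sets of formulas; the function name and this encoding are our own assumptions, not part of the LRI.

```python
# Illustrative sketch (not the LRI's own representation): formulas are opaque
# strings, positions (Pi) and contexts (Ci) are sets of formulas.

def complexity_parameters(A, H, positions, contexts):
    """Count the per-case complexity parameters described above.
    A: agreed facts and norms; H: disputed facts and norms."""
    # Elements of H that occur in no position and no context are 'orphaned':
    # antecedents without a consequent, or consequents without an antecedent.
    used = set().union(*positions, *contexts)
    orphaned = {h for h in H if h not in used}
    total = len(A) + len(H)
    return {
        "agreed_fraction": len(A) / total if total else 0.0,    # consented knowledge
        "disputed_fraction": len(H) / total if total else 0.0,  # disputed knowledge
        "orphaned": len(orphaned),       # elements proposed without justification
        "n_positions": len(positions),   # number of minimal positions Pi
        "n_contexts": len(contexts),     # number of consistent contexts Ci
    }
```

Comparing these counts across cases yields the relative measures described above: a higher disputed fraction, more orphaned elements, and more positions or contexts all indicate a more complex case.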
In the fifth phase, the ranking of the contexts takes place.
The number of rankings depends on the inclusion of metanorms in the respective contexts. Metanorms that are agreed upon are part of Ai; metanorms that are not agreed upon are part of Hi. The process of applying the metanorms is fully recursive, since the objects of the metanorms are other (meta)norms, which are themselves also part of (Ai, Hi). This means that the determination of the complexity of the application of the metanorms is included in the previous phases. In this phase only the resulting number of rankings is established, which can be considered an independent measure of complexity.
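As an illustration of this phase, the sketch below models each metanorm as a hypothetical scoring function over contexts; this encoding is our own assumption for illustration, since the text leaves the representation of metanorms open. Agreed metanorms yield one shared ranking, each disputed metanorm yields its own ranking, and the number of distinct rankings is the resulting measure.

```python
# Hypothetical sketch: a metanorm is modelled as a function scoring a context.
# This encoding is an assumption for illustration, not the LRI's own.

def context_rankings(contexts, agreed_metanorms, disputed_metanorms):
    """Return the set of distinct rankings (tuples of context indices)."""
    def rank(metanorms):
        # Order context indices by their combined metanorm score, best first.
        score = lambda i: sum(m(contexts[i]) for m in metanorms)
        return tuple(sorted(range(len(contexts)), key=score, reverse=True))

    if not disputed_metanorms:
        return {rank(agreed_metanorms)}
    # Each disputed metanorm, combined with the agreed ones, yields a ranking.
    return {rank(agreed_metanorms + [m]) for m in disputed_metanorms}
```

For example, with an agreed metanorm preferring larger (more specific) contexts and two disputed metanorms that order the contexts differently, the function returns two distinct rankings, so the phase-five complexity count is 2.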
5. Validation of the model of complexity
The model of parameters for a reasonable calculation of the complexity of legal knowledge described in the previous paragraph is based on prior theoretical and empirical research into the complexity of legal knowledge (de Vey Mestdagh, 1997, 1998). A total of 430 environmental law cases have been formally represented in the formal model of legal knowledge introduced in paragraph 3 (the extended LRI), and their relative complexity has been established on the basis of legal expert judgements. In the experts' opinion, 45 cases were of a complex nature and 385 of a less complex (more general) nature. This has been verified by applying an expert system to these cases that was enabled (provided with more complete data and knowledge) to improve on the human judgements in the 430 cases. The test results have shown that in the complex cases 66% of the human judgements were improved by the expert system (of which 20% were full revisions), while in the general cases only 27% of the human judgements were improved (of which only 2% were full revisions). The complex cases are characterized by higher counts of the parameters distinguished in the previous paragraph.
Further validation research is needed to refine the model of parameters for a reasonable calculation of the complexity of legal knowledge described in the previous paragraph. The relative weight of the counts of the parameters described will be varied against the available dataset of legal cases. The results will also be correlated with other available variables to gain further insight into possible parameters of complexity. Examples of these variables are: number of submitted documents, length of procedure, number of appeals, spending power of the parties involved, level of expertise of the lawyers involved, etc.
6. Conclusion and further research
In this paper we have explored the concept of complexity in the legal domain. A first conclusion is that the concept has not been studied explicitly in the legal domain: only indirectly, as a qualitative concept (vagueness, open texture, etc.), and hardly ever as a quantitative concept. However, a quantitative model of complexity in the legal domain has - apart from its scientific meaning per se - legal, economic and political implications. It will allow us to improve the quality and efficiency of legal proceedings. Algorithmic Information Theory offers several approaches to the quantification of complexity that inspired the approach chosen in this paper. It induced the thought that a distinction between problem complexity and resolution complexity is necessary, and that a model of complexity based on the formal representation of legal knowledge should be the first step in developing a model of complexity in the legal domain. In this paper we give a description of a formal representation of legal knowledge (the extended Logic of Reasonable Inferences) and we describe the quantitative parameters of complexity for this model. We would like to call the result Reasonable Complexity, because it is based on the LRI and because it inherits its relative, perspective-bound character. Complexity is specifically relative to the number of perspectives combined in the knowledge under consideration. Further research will focus on extending the model of complexity to resolution complexity, using - amongst others - available algorithms (i.a. Argumentator, a computer program we developed to implement the LRI). It will also use an available dataset of 430 environmental law cases that have been described and analysed before and that have already been represented in Argumentator.
References
Burgin, M.: Super-Recursive Algorithms, Springer Science & Business Media, 2006
Burgin, M., de Vey Mestdagh, C.N.J.: The Representation of Inconsistent Knowledge in Advanced Knowledge Based Systems. In: Andreas Koenig, Andreas Dengel, Knut Hinkelmann, Koichi Kise, Robert J. Howlett, Lakhmi C. Jain (eds.). Knowledge-Based and Intelligent Information and Engineering Systems, vol. 2, pp. 524-537. Springer Verlag, ISBN 978-3-642-23862-8, 2011
Burgin, M., de Vey Mestdagh, C.N.J.: Consistent structuring of inconsistent knowledge. In: J. of Intelligent Information Systems, pp. 1-24, Springer US, September 2013
Dworkin, R.: Law's Empire, Cambridge, Mass., Belknap Press, 1986
Hart, H.L.A.: The Concept of Law, New York, Oxford University Press, 1994
Kades, E.: The Laws of Complexity & the Complexity of Laws: The Implications of Computational Complexity Theory for the Law (1997). Faculty Publications. Paper 646. http://scholarship.law.wm.edu/facpubs/646
Ruhl, J. B.: Complexity Theory as a Paradigm for the Dynamical Law-and-Society System: A Wake-Up Call for Legal Reductionism and the Modern Administrative State. Duke Law Journal, Vol. 45, No. 5 (Mar., 1996), pp. 849-928
Schuck, Peter H.: Legal Complexity: Some Causes, Consequences, and Cures. Duke Law Journal, Vol. 42, No. 1 (Oct., 1992), pp. 1-52
Vey Mestdagh, C.N.J. de, Verwaard, W., Hoepman, J.H.: The Logic of Reasonable Inferences. In: Breuker, J.A., Mulder, R.V. de, Hage, J.C. (eds) Legal Knowledge Based Systems, Model-based legal reasoning, Proc. of the 4th annual JURIX Conf. on Legal Knowledge Based Systems, pp. 60-76. Vermande, Lelystad, 1991
Vey Mestdagh, C.N.J. de: Juridische Kennissystemen, Rekentuig of Rekenmeester?, Het onderbrengen van juridische kennis in een expertsysteem voor het milieuvergunningenrecht (dissertation), 400 pp., serie Informatica en Recht, nr. 18, Kluwer, Deventer, 1997, ISBN 90 268 3146 3
Vey Mestdagh, C.N.J. de: Legal Expert Systems. Experts or Expedients? In: Ciampi, C., E. Marinai (eds.), The Law in the Information Society, Conference Proceedings on CD-Rom, Istituto per la documentazione giuridica del Centro Nazionale delle Ricerche, Firenze, 2-5 December 1998, 8 pp.
Vey Mestdagh, C.N.J. de, Hoepman, J.H.: Inconsistent Knowledge as a Natural Phenomenon: The Ranking of Reasonable Inferences as a Computational Approach to Naturally Inconsistent (Legal) Theories. In: Dodig-Crnkovic, G. & Burgin, M. (Eds.), Information and Computation (pp. 439-476). New Jersey: World Scientific, 2011
Vey Mestdagh, C.N.J. de, Burgin, M.: Reasoning and Decision Making in an Inconsistent World: Labeled Logical Varieties as a Tool for Inconsistency Robustness. In: R. Neves-Silva, L. C. Jain, & R. J. Howlett (Eds.), Intelligent Decision Technologies. (pp. 411-438). Smart Innovation, Systems and Technologies; Vol. 39. Springer, 2015
White, M.J.: Legal Complexity and Lawyers’ Benefit from Litigation. International Review of Law and Economics (1992) 12, 381-395.
[1] Cf. H.L.A. Hart, who uses the concept of discretion to characterize hard (complex) cases, in The Concept of Law, New York, Oxford University Press, 1994; and R. Dworkin, who distinguishes easy from hard cases using the concept of principled interpretation, in Law's Empire, Cambridge, Mass., Belknap Press, 1986. Although fundamentally differing in their opinions about the sources of the decision criteria, they both acknowledge the alternative perspectives that play a role in deciding complex cases (the judge's discretion in the light of the parties' alternative perspectives vs. the judge's principled interpretation in the context of the parties' alternative perspectives).