A Model of Complexity for the Legal Domain
Dr. C.N.J. de Vey Mestdagh, University of Groningen, the Netherlands, c.n.j.de.vey.mestdagh@rug.nl (http://www.rug.nl/staff/c.n.j.de.vey.mestdagh/)

Extended abstract, submission date 15-4-2017

The complexity of the universe can only be defined in terms of the complexity of the perceptual apparatus. The simpler the perceptual apparatus, the simpler the universe. The most complex perceptual apparatus must conclude that it is alone in its universe.

Abstract

The concept of complexity has been neglected in the legal domain, both as a qualitative concept that could be used to legally and politically analyse and criticize legal proceedings and as a quantitative concept that could be used to compare, rank, plan and optimize these proceedings. In science the opposite is true: especially in the field of Algorithmic Information Theory (AIT), the concept of complexity has been scrutinized.

In this paper we first take a quick look at AIT to see what it could mean in this phase of our research in the legal domain. We conclude that there is a difference between problem complexity and solution complexity. In this paper we therefore start to develop a model of complexity by describing problem complexity in the legal domain. We use a formal model of legal knowledge to derive and describe the parameters for the description of the problem complexity of cases represented in this formal model. Further research will focus on refining and extending the formalization of the model of complexity, on the comparison of problem and solution complexity for several legal cases using available algorithms, and on the validation of the combined model against concrete cases and lawyers’ and legal organizations’ opinions about their complexity.

1.     Complexity in the legal domain

 

The concept of complexity is hardly developed in the legal domain. Most descriptions of concepts related to complexity in legal literature refer to vagueness (intension of concepts), open texture (extension of concepts), sophistication (number of elements and relations) and multiplicity of norms (concurring opinions), in most cases even without explicit reference to the concept of complexity. Complexity arises in all these cases from the existence and competition of alternative perspectives on legal concepts and legal norms.[1] A concept or norm that is complex from a scientific point of view is not necessarily complex from a legal point of view: if all parties involved agree, i.e. have or choose the same perspective or opinion, there is no legal complexity, i.e. there is no case or the case is solved. In science, more exact definitions of complexity are common and applied; complexity is associated with, inter alia, uncertainty, improbability and quantified information content. Despite this discrepancy between the legal domain and the domain of science, complexity is as important in the legal domain as in other knowledge domains. Apart from the obvious human interest of acquiring and propagating knowledge per se, complexity has legal, economic, political and psychological importance. Legal, because a coherent concept of complexity helps to analyse and criticize legal proceedings, in order to clarify them, to enable a justified choice of the level of expertise needed to solve legal cases, and to reduce unnecessary complexity (an example of reducing complexity by compression is given in the next paragraph); economic, because complexity increases costs and measuring complexity is a precondition for reducing these costs (it can help in designing effective norms, implementing them effectively, calculating and reducing the costs of legal procedures (cf. White, M.J., 1992), planning the settlement of disputes and other legal proceedings, etc.); political, because legal complexity can be an instrument to exert power and can increase inequality; psychological, because complexity increases uncertainty. A validated model of complexity in the legal domain can help to promote these interests (cf. Schuck, P.H., 1992; Ruhl, J.B., 1996; Kades, E., 1997).

How to develop a model of complexity in the legal domain (methodology)

In this paper we will try to bridge the gap between the intuitive definitions of complexity in the legal domain and the more exact way of defining complexity in science. We will do that on the basis of a formal model of legal knowledge (the Logic of Reasonable Inferences and its extensions) that we introduced before, that was implemented as the algorithm of the computer program Argumentator, and that was empirically validated against a multitude of real-life legal cases. The ‘complexities’ of these legal cases proved to be adequately represented in the formal model. In earlier research we tested the formal model against 430 cases, of which 45 were deemed more complex and 385 less complex by lawyers. A first result was that the algorithm (Argumentator), when provided with case facts and legal knowledge, was able to solve 42 of the 45 more complex cases and 383 of the 385 less complex cases in exactly the same way as the legal experts did (including the systematic mistakes made by these experts). A second result was that the algorithm, when instructed to do so, improved the decisions in 30 (66%) of the 45 more complex cases and in 104 (27%) of the 385 less complex cases. This result confirms the relative complexity of the first 45 cases. The selection of these 45 cases thus provides us with the material from which criteria for the definition of complexity in this paper could be derived. These criteria are translated into quantitative statements about the formal representation of the cases. Further research will focus on the fine-tuning of this quantitative model by comparing its results with new empirical data (new cases and opinions of lawyers about the (subjective) complexity of cases). Finally, the ability of the fine-tuned model to predict complexity in new cases will be tested. A positive result can be applied to reduce the aforementioned costs of processing complex legal knowledge.

 

2.     Models of complexity in science

 

There are many different definitions of complexity in science. The aim of this research is to develop a measure of complexity for formal representations of legal knowledge and their algorithmic implementations. In this abstract we will therefore refer to definitions of complexity from Algorithmic Information Theory (AIT), which studies the complexity of data structures (representations of knowledge in a computer). In AIT the complexity of a data structure is equated with its information content. Complexity is postulated to decrease proportionate to the degree of (algorithmic) compressibility of the data structure. To assess the usefulness of AIT for our practical purpose, i.e. the design of a quantitative model of complexity of legal knowledge, we studied some publications from the domain of AIT. We read that complexity is approached as Algorithmic Probability (cf. Solomonoff’s a priori probability), i.e. the higher the probability that a random computer program outputs an object, the less complex this object is considered to be. We read that complexity is approached as Algorithmic Complexity (cf. Kolmogorov’s descriptive complexity), i.e. the shorter the code needed to describe an object (string), the less complex this object is considered to be. This is an interesting approach, since it seems to offer a concrete measure for the complexity of certain objects (e.g. of legal problems) and it is associated with the concept of compressibility, which we are able to transpose as simplification (as opposed to sophistication) to the legal domain. Finally we read about Dual Complexity Measures (cf. Burgin, 2006), which relate AIT to more complex problem structures and distinguish the complexity of the system described (the problem and its solution) from the complexity of the description (the algorithm used to describe the problem and its solution). A common and essential aspect of these approaches is the compressibility of the object as a measure of its complexity. In all these cases the computer program is considered to be an explanation of a (more or less complex) object (or data structure). Our conclusion is that these approaches will be useful when trying to prove certain characteristics of the model of complexity in the legal domain, once developed, but not primarily for the design of the model. We will have to describe the formal model and the algorithm (explanation) first. Just to get a practical insight into the concept of compressibility, we applied the idea of compressibility to some legal cases (see the example below). However, many of the characteristics of legal cases that make them ‘complex’ according to lawyers are not directly related to compressibility. Moreover, the simplest ‘palaver’ in the legal domain is often meant to be incomprehensible and therefore misses the (semantic and relational) patterns that are needed to be compressible. Our conclusion is that this concept only partially covers the problem in the legal domain. We are eager to discuss this with our colleagues in the mathematical domain.
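Kolmogorov complexity itself is uncomputable, but a common practical proxy is the length of a losslessly compressed encoding of an object. The following toy sketch (our own illustration, not part of the formal model; it uses Python's zlib purely as such a proxy) shows how a strongly patterned text compresses far better than an unpatterned text of the same length:

    # Toy illustration (our own sketch): zlib compression as a crude, computable
    # stand-in for Kolmogorov-style descriptive complexity. A strongly patterned
    # text compresses much better than an unpatterned text of the same length.
    import zlib

    sentence = ("if objects have reached the waste phase and the user "
                "and are not listed items, they are not chemical waste. ")
    patterned = sentence * 20
    unpatterned = "".join(chr(33 + (i * 7919) % 90) for i in range(len(patterned)))

    for label, text in [("patterned", patterned), ("unpatterned", unpatterned)]:
        raw = text.encode("utf-8")
        compressed = zlib.compress(raw, 9)
        print(f"{label}: {len(raw)} -> {len(compressed)} bytes "
              f"(ratio {len(compressed) / len(raw):.2f})")

The lower compression ratio of the patterned text corresponds, in this reading, to lower descriptive complexity.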

An example of operand compression using logical equivalence in the legal domain

Objects regulation U.1. appendix III Decree Indication Chemical Waste reads:

‘Waste products are not considered as chemical waste [cw] if they are objects [o] that have attained the waste phase of their lifecycle [wp], unless:

  1. This has happened before they have reached the user [ru];
  2. This has happened after they have reached the user [ru] and they are
    1. transformers .. [1]  .. 10. mercury thermometers. [10]’

The logical structure of this legal provision is:

             not cw is implied by o and wp and not ((not ru) or (ru and (1 or .. or 10)))

Logically equivalent with this formalisation of the provision is the formula:

             not cw is implied by o and wp and ru and not (1 or .. or 10)

 which is a compression of the original provision.
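This equivalence can also be checked mechanically. The brute-force check below is an illustrative sketch of ours; the enumerated categories 1..10 are collapsed into a single propositional variable 'listed':

    # Illustrative check: the original and the compressed formalisation of the
    # provision agree for every truth assignment of o, wp, ru and 'listed'
    # (the disjunction of the enumerated categories 1..10).
    from itertools import product

    def original(o, wp, ru, listed):
        # not cw is implied by o and wp and not ((not ru) or (ru and listed))
        return o and wp and not ((not ru) or (ru and listed))

    def compressed(o, wp, ru, listed):
        # not cw is implied by o and wp and ru and not listed
        return o and wp and ru and not listed

    assert all(original(*v) == compressed(*v)
               for v in product([True, False], repeat=4))
    print("The two formalisations are logically equivalent.")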

Interestingly enough, the retranslation of this equivalent formula into natural language is:

‘Waste products are not considered as chemical waste if they are objects that have attained the waste phase of their lifecycle and they have reached the user and they are not 1. transformers .. 10. mercury thermometers’.

Although this example illustrates that compression can be beneficial because it improves the readability of the regulation, it does not reduce its actual complexity, which - in practice - is related to different opinions about the meaning of concepts like ‘Waste products’.

3.     A formal model of legal knowledge (reasonable inferences)

 

The first step in developing a model of complexity in the legal domain is to describe the formal characteristics of legal knowledge that are related to the essence of complexity in this domain, i.e. the competition of opinions. In a previous publication (de Vey Mestdagh and Burgin, 2015) we introduced the following model that allows for reasoning about (mutually exclusive) alternative opinions and that allows for tagging the alternatives, e.g., describing their identity and context:

Our knowledge of the world is always perspective bound and therefore fundamentally inconsistent, even if we agree to a common perspective, because this agreement is necessarily local and temporal due to the human epistemic condition. The natural inconsistency of our knowledge of the world is particularly manifest in the legal domain (de Vey Mestdagh et al., 2011).

In the legal domain, on the object level (that of case facts and opinions about legal subject behavior), alternative (often contradicting) legal positions compete. All of these positions are a result of reasoning about the facts of the case at hand and a selection of preferred behavioral norms presented as legal rules. At the meta-level meta-positions are used to make a choice for one of the competing positions (the solution of an internal conflict of norms, a successful subject negotiation or mediation, a legal judgement). Such a decision based on positions that are inherently local and temporal is by definition also local and temporal itself. The criteria for this choice are in most cases based on legal principles. We call these legal principles metaprinciples because they are used to evaluate the relations between different positions at the object level.

To formalize this natural characteristic of (legal) knowledge we developed the Logic of Reasonable Inferences (LRI, de Vey Mestdagh et al., 1991). The LRI is a logical variety that handles inconsistency by preserving inconsistent positions and their antecedents, using as many independent predicate calculi as there are inconsistent positions (Burgin and de Vey Mestdagh, 2011, 2013). The original LRI was implemented and proved to be effective as a model of and a tool for knowledge processing in the legal domain (de Vey Mestdagh, 1998). In order to be able to make inferences about the relations between different positions (e.g. make local and temporal decisions), labels were added to the LRI. In de Vey Mestdagh et al. (2011) formulas and sets of formulas are named and characterized by labelling them in the form (Ai, Hi, Pi, Ci). These labels are used to define and restrict different possible inference relations (Axioms Ai and Hypotheses Hi, i.e. labeled signed formulas and control labels) and to define and restrict the composition of consistent sets of formulas (Positions Pi and Contexts Ci). Formulas labeled Ai must be part of any position and context and therefore are not (allowed to be) inconsistent. Formulas labeled Hi can only be part of the same position or context if they are mutually consistent. A set of formulas labeled Pi represents a position, i.e. a consistent set of formulas including all Axioms (e.g., a perspective on a world, without inferences about that world). A set of formulas labeled Ci represents a context (a maximal set of consistent formulas within the (sub)domain and their justifications, cf. the world under consideration). All these labels can be used as predicate variables and, if individualized, to instantiate predicate variables, and consequently as constants (variables as named sets). Finally, certain metacharacteristics of formulas and pairs of formulas were described by labels (e.g., metapredicates like Valid, Exclude and Prefer) describing some of their legal source characteristics and their legal relations, which could be used to rank the different positions externally. The semantics of these three predicates (Valid, Exclude and Prefer) are described in de Vey Mestdagh et al. (2011). These three predicates describe the elementary relations between legal positions that are prescribed by the most fundamental sets of legal principles (i.e. principles regarding the legal validity of positions, principles regarding the relative exclusivity of legal positions even if they do not contradict each other, and principles regarding the preference of one legal position over another). It was also demonstrated that the LRI allows for reasoning about (mutually exclusive) alternatives.
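To make this labelling concrete, the sketch below (a minimal illustration with hypothetical names, not the Argumentator implementation) shows one possible representation of labelled formulas and of the consistency test that underlies positions and contexts:

    # Minimal sketch (hypothetical names) of labelled formulas in the spirit of
    # the extended LRI: axioms (A) must appear in every position and context,
    # hypotheses (H) may only co-occur when mutually consistent. Meta-predicates
    # (Valid, Exclude, Prefer) are modelled here as relations over named positions.
    from dataclasses import dataclass
    from enum import Enum

    class Label(Enum):
        AXIOM = "A"
        HYPOTHESIS = "H"

    @dataclass(frozen=True)
    class Formula:
        atom: str              # e.g. "chemical_waste"
        negated: bool = False
        label: Label = Label.HYPOTHESIS
        party: str = ""        # which party advanced the formula

    def consistent(formulas):
        """A set is consistent if it never contains an atom and its negation."""
        signed = {(f.atom, f.negated) for f in formulas}
        return not any((atom, not neg) in signed for atom, neg in signed)

    # External ranking information, e.g. "prefer the position based on the more
    # specific norm": a hypothetical Prefer relation over named positions.
    prefer = {("position_lex_specialis", "position_lex_generalis")}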

In (de Vey Mestdagh and Burgin, 2015) we showed that labels can be used formally to describe the ranking process of positions and contexts. The LRI thus extended allows for local and temporal decisions for a certain alternative, i.e. without discarding the non-preferred alternatives as belief revision does and without averaging over all alternatives as probabilistic logics do. This extended the LRI from a logical variety that could be used to formalize the non-explosive inference of inconsistent contexts (opinions) and the naming of (the elements of) these contexts, to a labeled logical variety in which tentative decisions can be formally represented by a labelling that allows for expressing the semantics of the aforementioned meta-predicates and for prioritizing (priority labelling). In (de Vey Mestdagh and Burgin, 2015) we illustrated the use of these labels by examples.

In the next paragraph we will use the extended LRI to identify the quantitative parameters of complexity in the legal domain.

 

4.     A formal model of the complexity of legal knowledge (parameters for a reasonable calculation of complexity)

 

The processing of legal knowledge takes place in successive phases. Each phase is characterized by its own perspectives and associated parameters of complexity. Roughly: first the different parties in a legal dispute take their positions, then the positions are confronted and a decision is made, and finally the decision is presented. The complexity of the dispute differs from phase to phase, again roughly from intermediate (the separate positions), to high (the confrontation and decision making), to low (the decision itself). The separate positions are basically consistent and their contents can each be processed within a separate single logical variety. When the dispute starts, complexity increases, because the shared axioms of the dispute have to be calculated, the positions are by definition mutually inconsistent, and several calculi within the logical variety have to be used to calculate the joint process of the dispute and to decide between different hypotheses within the dispute. Ultimately, the decision principles included in the different positions have to be used to rank the different consistent solutions. The dispute ends by presenting the highest-ranking consistent (local and temporal) decision, representing a concurring opinion or a compromise. The complexity of this result is reduced again, because it can be (re)calculated within a single consistent variety. Below we will describe these phases in more detail and the related parameters of complexity in terms of the formal model introduced above.

The complexity of a given case can be quantified on the basis of the case elements and relations presented by all parties. The processing takes place in five phases:

At the start of legal knowledge processing the case can be described as:

  • A number of sets n (the number of parties involved) of labelled formulas Hi,l representing the initial positions of each of the parties in a legal discourse, i.e. the hypotheses Hi of each party l about the (alleged) facts and applicable norms in a legal case;

The next step is:

  • Determining the intersection of these sets Hi,l, which defines Ai, representing the agreed case facts and norms, and determining the union of all complements, which defines Hi; (Ai, Hi) represents the initial case description.

The third step is:

  • Calculating all possible minimal consistent positions Pi that can be inferred from (Ai, Hi) by applying a logic, e.g. the LRI, a logical variety that allows each position to be established by its own calculus. If these calculi differ, this adds to the complexity of the problem. In earlier publications we assumed all calculi to be the same (predicate calculus).

The fourth step is:

  • Calculating all maximal consistent contexts (cf. possible consistent worlds) Ci on the basis of (Ai, Hi, Pi).

The last step is:

  • Making a ranking of these contexts on the basis of the application of the metanorms (decision criteria) included in them. A formal description and an example of this process are given in (de Vey Mestdagh and Burgin, 2015); a computational sketch of the five phases is given below.
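The sketch is a toy, propositional-level illustration of ours with invented example data (the actual LRI works with labelled predicate-calculus varieties): it derives the shared axioms Ai and the disputed hypotheses Hi from two parties' positions, computes the maximal consistent contexts Ci and applies a placeholder ranking; the minimal positions Pi are omitted for brevity.

    # Toy, propositional-level sketch of the five phases (invented example data).
    # Formulas are signed atoms (atom, negated); a set is consistent when no atom
    # occurs with both signs.
    from itertools import combinations

    def consistent(fs):
        return not any((a, not n) in fs for a, n in fs)

    # Phase 1: each party's initial hypotheses
    parties = {
        "operator":  {("object", False), ("waste_phase", False), ("reached_user", False)},
        "authority": {("object", False), ("waste_phase", False), ("reached_user", True)},
    }

    # Phase 2: shared axioms A (intersection) and disputed hypotheses H (the rest)
    A = set.intersection(*parties.values())
    H = set.union(*parties.values()) - A

    # Phases 3-4: maximal consistent contexts C containing A plus a subset of H
    contexts = []
    for size in range(len(H), -1, -1):
        for subset in combinations(H, size):
            candidate = A | set(subset)
            if consistent(candidate) and not any(candidate < c for c in contexts):
                contexts.append(candidate)

    # Phase 5: rank the contexts; size is used here as a placeholder for the
    # ranking that the metanorms (decision criteria) would produce.
    ranking = sorted(contexts, key=len, reverse=True)
    print(f"|A| = {len(A)}, |H| = {len(H)}, number of contexts = {len(contexts)}")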

Each step in this process is characterized by its own parameters of complexity. In legal practice different procedures are used to determine and handle (reduce) complexity in these different phases.

In the first phase a direct, static measure of complexity is commonly applied: the number of parties and the number of Hypotheses. This is a rough estimate of the number of different positions (interpretations, perspectives, interests).

In the second phase a direct, relative measure of complexity is commonly applied: the number of formulas in Ai and its size relative to Hi. The larger the relative size of Ai, the less complex a case is considered to be, because there is supposed to be more consensus.

In the third and fourth phases all positions Pi and contexts Ci are derived:

Given the resulting set of labelled formulas (Ai, Hi, Pi, Ci) representing the legal knowledge presented in a certain case, the problem complexity of this set can be defined as follows:

  1. The subset Ai (agreed case facts and norms) is by definition included in each Pi and Ci so its inclusion as such is not a measure for complexity as it reflects absolute consent;
  2. The elements of the subset Hi are by definition not included in each Pi and Ci, so the relative size of the inclusion of its elements is a measure of complexity as it reflects relative consent. If there is more conformity, there is less complexity. It is even possible that certain elements of the subset Hi are not included in any Pi and Ci. The number of these ‘orphaned’ elements can also contribute to the complexity of a case, because they represent antecedents without consequent or consequents without antecedents (a decision is proposed without justification). Orphaned elements can be the result of incompletely presented positions or - worse - be smoke screens;
  3. The relative size of the fraction of subset Ai in (Ai, Hi) - relative to the fraction of Ai in other cases - is a measure of complexity as it reflects the size of shared (consented) knowledge in a legal dispute. This holds even if the size of Ai is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the Ai into consideration.
  4. The relative size of the fraction of subset Hi in (Ai, Hi) - relative to the Hi in other cases - is a measure of complexity as it reflects the size of disputed knowledge in a legal dispute. This holds even if the size of Hi is manipulated by one or more of the parties involved (as a winning strategy or for billing reasons), because the other parties have to take the Hi into consideration.
  5. The relative size of the subset Pi (relative to the Pi in other cases) is a measure of complexity as it reflects the number of different minimal positions that can be taken logically in this specific case. The size of Pi can only be manipulated indirectly (through the respective sizes of Ai and Hi).
  6. The relative size of the subset Ci (relative to the Ci in other cases) is a measure of complexity as it reflects the number of different consistent contexts (possible decisions) that can be distinguished in this specific case.

In the fifth phase ranking of the contexts takes place.

The number of rankings depends on the inclusion of metanorms in the respective contexts. Metanorms that are agreed upon are part of Ai, metanorms that are not agreed upon are part of Hi. The process of applying the metanorms is fully recursive, since the objects of the metanorms are other (meta)norms, which are themselves also part of (Ai, Hi). This means that the determination of the complexity of the application of the metanorms is included in the previous phases. In this phase only the resulting number of rankings is established and can be considered to be an independent measure of complexity.
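Assuming the sets and rankings of the five phases have already been computed, the parameters above can be collected into a simple per-case profile. The helper below is a hypothetical sketch of ours, not a validated metric; comparing such profiles across cases yields the relative measures of items 3 to 6 and the ranking count of the fifth phase:

    # Hypothetical helper (not a validated metric): collect the complexity
    # parameters of one case into a single profile, assuming A, H, the positions,
    # the contexts and the list of alternative rankings are already available.
    def complexity_profile(A, H, positions, contexts, rankings):
        used = set().union(*contexts) if contexts else set()
        orphaned = [h for h in H if h not in used]   # hypotheses in no context
        total = len(A) + len(H)
        return {
            "disputed_hypotheses": len(H),           # phase 1: size of the dispute
            "consensus_ratio": len(A) / total if total else 1.0,  # phase 2
            "orphaned_hypotheses": len(orphaned),    # proposals without justification
            "positions": len(positions),             # phase 3: minimal consistent positions
            "contexts": len(contexts),               # phase 4: possible decisions
            "rankings": len(rankings),               # phase 5: competing orderings
        }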

5.     Validation of the model of complexity

 

The model of parameters for a reasonable calculation of complexity of legal knowledge as described in the previous paragraph is based on prior theoretical and empirical research into the complexity of legal knowledge (de Vey Mestdagh, 1997, 1998). A total of 430 environmental law cases have been formally represented in the formal model of legal knowledge introduced in paragraph 3 (the extended LRI) and their relative complexity has been established on the basis of legal expert judgements. The opinion of the experts was that 45 cases were of a complex nature and 385 of a less complex (more general) nature. This has been verified by applying an expert system to these cases that was enabled (provided with more complete data and knowledge) to improve on the human judgements in the 430 cases. The test results have shown that in the complex cases 66% of the human judgements were improved by the expert system (of which 20% full revisions), while in the general cases only 27% of the human judgements were improved by the expert system (of which only 2% full revisions). The complex cases are characterized by higher counts of the parameters distinguished in the previous paragraph.

Further validation research is needed to refine the model of parameters for a reasonable calculation of complexity of legal knowledge as described in the previous paragraph. The relative weight of the counts of the parameters described will be varied against the available dataset of legal cases. The results will also be correlated with other variables that are available, to gain further insight into possible parameters of complexity. Examples of these variables are: number of submitted documents, length of procedure, number of appeals, spending power of the parties involved, level of expertise of the lawyers involved, etc.

6.     Conclusion and further research

 

In this paper we have explored the concept of complexity in the legal domain. A first conclusion is that the concept has not been studied explicitly in the legal domain: only indirectly as a qualitative concept (vagueness, open texture, etc.) and hardly ever as a quantitative concept. However, a quantitative model of complexity in the legal domain has, apart from its scientific meaning per se, legal, economic and political implications. It will allow us to improve the quality and efficiency of legal proceedings. Algorithmic Information Theory offers several approaches to the quantification of complexity that inspired the approach chosen in this paper. It induced the thought that a distinction between problem complexity and resolution complexity is necessary and that a model of complexity based on the formal representation of legal knowledge should be the first step in developing a model of complexity in the legal domain. In this paper we give a description of a formal representation of legal knowledge (the extended Logic of Reasonable Inferences) and we describe the quantitative parameters of complexity for this model. We would like to call the result Reasonable Complexity, because it is based on the LRI and because it inherits its relative, perspective-bound character. Complexity is specifically relative to the number of perspectives combined in the knowledge under consideration. Further research will focus on extending the model of complexity to resolution complexity, using, amongst others, available algorithms (i.a. Argumentator, a computer program we developed to implement the LRI). It will also use an available dataset of 430 environmental law cases that have been described and analysed before and that have already been represented in Argumentator.

References

Burgin, M.: Super-Recursive Algorithms, Springer Science & Business Media, 2006

Burgin, M., de Vey Mestdagh, C.N.J.: The Representation of Inconsistent Knowledge in Advanced Knowledge Based Systems. In: Andreas Koenig, Andreas Dengel, Knut Hinkelmann, Koichi Kise, Robert J. Howlett, Lakhmi C. Jain (eds.). Knowlege-Based and Intelligent Information and Engineering Systems, vol. 2, pp. 524-537. Springer Verlag, ISBN 978-3-642-23862-8, 2011

Burgin, M., de Vey Mestdagh, C.N.J.: Consistent structuring of inconsistent knowledge. In: J. of Intelligent Information Systems, pp. 1-24, Springer US, September 2013

Dworkin, R.: Law's Empire, Cambridge, Mass., Belknap Press, 1986

Hart, H.L.A.: The Concept of Law, New York, Oxford University Press, 1994

Kades, E.: The Laws of Complexity & the Complexity of Laws: The Implications of Computational Complexity Theory for the Law (1997). Faculty Publications. Paper 646. http://scholarship.law.wm.edu/facpubs/646

Ruhl, J.B.: Complexity Theory as a Paradigm for the Dynamical Law-and-Society System: A Wake-Up Call for Legal Reductionism and the Modern Administrative State. Duke Law Journal, Vol. 45, No. 5 (Mar., 1996), pp. 849-928

Schuck, P.H.: Legal Complexity: Some Causes, Consequences, and Cures. Duke Law Journal, Vol. 42, No. 1 (Oct., 1992), pp. 1-52

Vey Mestdagh, C.N.J. de, Verwaard, W., Hoepman, J.H.: The Logic of Reasonable Inferences. In: Breuker, J.A., Mulder, R.V. de, Hage, J.C. (eds) Legal Knowledge Based Systems, Model-based legal reasoning, Proc. of the 4th annual JURIX Conf. on Legal Knowledge Based Systems, pp. 60-76. Vermande, Lelystad, 1991

Vey Mestdagh, C.N.J. de.: Juridische Kennissystemen, Rekentuig of Rekenmeester?, Het onderbrengen van juridische kennis in een expertsysteem voor het milieuvergunningenrecht (proefschrift), 400 pp., serie Informatica en Recht, nr. 18, Kluwer, Deventer, 1997, ISBN 90 268 3146 3;

Vey Mestdagh, C.N.J. de: Legal Expert Systems. Experts or Expedients? In: Ciampi, C., E. Marinai (eds.), The Law in the Information Society, Conference Proceedings on CD-Rom, Istituto per la documentazione giuridica del Centro Nazionale delle Ricerche, Firenze, 2-5 December 1998, 8 pp.

Vey Mestdagh, C.N.J. de, Hoepman, J.H.: Inconsistent Knowledge as a Natural Phenomenon: The Ranking of Reasonable Inferences as a Computational Approach to Naturally Inconsistent (Legal) Theories. In: Dodig-Crnkovic, G. & Burgin, M. (Eds.), Information and Computation (pp. 439-476). New Jersey: World Scientific, 2011

Vey Mestdagh, C.N.J. de, Burgin, M.: Reasoning and Decision Making in an Inconsistent World: Labeled Logical Varieties as a Tool for Inconsistency Robustness. In: R. Neves-Silva, L. C. Jain, & R. J. Howlett (Eds.), Intelligent Decision Technologies. (pp. 411-438). Smart Innovation, Systems and Technologies; Vol. 39. Springer, 2015

White, M.J.: Legal Complexity and Lawyers’ Benefit from Litigation. International Review of Law and Economics (1992) 12, 381-395.

[1] Cf. H.L.A. Hart, who uses the concept of discretion to characterize hard (complex) cases, in The Concept of Law, New York, Oxford University Press, 1994; and R. Dworkin, who distinguishes easy from hard cases using the concept of principled interpretation, in Law's Empire, Cambridge, Mass., Belknap Press, 1986. Although fundamentally differing in their opinion about the sources of the decision criteria, they both acknowledge the alternative perspectives that play a role in deciding complex cases (the judge's discretion in the light of the parties' alternative perspectives vs. the judge's principled interpretation in the context of the parties' alternative perspectives).
