
List of accepted submissions

 
 
 
Science, Organization and Sustainability: A Multilevel Approach

Introduction

Some of the most pressing ecological and social problems facing us today are known to arise through interactions between self-organizing processes across two or more different levels in an organizational hierarchy. For instance, the ecosystem degradation [1] that accompanies socioeconomic development is largely attributable to the interactions between self-organization processes in our socioeconomic organizations and those in underlying ecosystem organizations. Social organizations like families and communities are known to decline in the course of national and global socioeconomic development at the next higher levels [2]. Similarly, self-organizing processes in globalization are known to significantly influence socioeconomic and political developments within nation states [3,4]. A deeper understanding of such multilevel phenomena requires self-organization models that span multiple levels.

While it is clear that understanding and addressing such problems requires multilevel models of self-organization, much of our current scientific research is confined to single research domains. This raises a question: can we develop new research methods that allow researchers working in single research domains to collectively explore and build models addressing multilevel problems? If so, it would enable a two-way exchange, where domain-level researchers can look at multilevel problems, and new insights from multilevel research can in turn spur domain-level research.

Cross-disciplinary research inherently poses many challenges. Researchers from different disciplines can have their own ways of framing a research problem [5]. The dimensions of a problem, and the framing of the problem, can also vary with changes in the context of the research. This poses a real challenge in integrating research from multiple researchers into a coherent model that can address real-life cross-disciplinary problems. Multilevel research in living systems is even more challenging, due to the added complexity of living systems and the multiplicity of coupled levels in such systems.

Methods

In this research, I have attempted to surmount some of these challenges by using an overarching multilevel hypothesis as high-level scaffolding that serves as a common conceptual frame to integrate research from across research disciplines. This common framing allows research from different disciplines and different levels to be aligned and integrated into a multilevel model that can reveal new insights that are not observable at the level of any single research domain.

Results and Discussion

This research has produced two multilevel hypotheses [6] that present social self-organization as an extension of a much larger pattern in natural self-organization. They also point to the possibility that a high-level organizational similarity could exist between three different networks that functionally modulate resource flows and enable adaptation in groups or ecosystems of interacting species at different levels in the hierarchy: the mycorrhizal fungal networks in the soil that modulate resource flows and enable community effects between terrestrial autotrophic species [7–9], the gut bacterial networks [10,11] that modulate speciation and resource flows in heterotrophic species in ecosystems, and the financial investment networks (investment markets) [12,13] that modulate resource flows in the socio-economic domain.

In this talk, I outline these developments. These ideas not only present new avenues in cross-disciplinary research, but also open up exciting new possibilities for aligning our socio-economic systems with underlying ecological systems. This research also illustrates how research insights from across multiple domains like linguistics, cognitive science, neural networks and machine learning, energetics, community learning and biology can be integrated to develop multilevel models that can help address some of the most pressing issues in social and ecological sustainability.

References and Notes

  1. WWF The Living Planet Report 2010. Biodiversity, Biocapacity and Development. WWF 2010.
  2. Putnam, R. D. Bowling alone: America's declining social capital. J. Democr. 1995, 6, 65–78.
  3. Sassen, S. Globalization or denationalization? Rev. Int. Polit. Econ. 2003, 10, 1–22.
  4. Sole, J. Globalization. The Human Consequences. Rev. Esp. Cienc. Polit. 1999, 1, 222.
  5. Dewulf, A.; François, G.; Pahl-Wostl, C.; Taillieu, T. A framing approach to cross-disciplinary research collaboration: Experiences from a large-scale research project on adaptive water management. Ecol. Soc. 2007, 12.
  6. Abstracts for the two hypotheses available at: http://www.lifel.org/our_projects.htm
  7. van der Heijden, M. G. A.; Horton, T. R. Socialism in soil? The importance of mycorrhizal fungal networks for facilitation in natural ecosystems. J. Ecol. 2009, 97, 1139–1150.
  8. van der Heijden, M. G. A.; Martin, F. M.; Selosse, M.-A.; Sanders, I. R. Mycorrhizal ecology and evolution: the past, the present, and the future. New Phytol. 2015, 205, 1406–1423.
  9. Bonfante, P.; Anca, I.-A. Plants, mycorrhizal fungi, and bacteria: a network of interactions. Annu. Rev. Microbiol. 2009, 63, 363–383.
  10. Norris, V.; Molina, F.; Gewirtz, A. T. Hypothesis: Bacteria control host appetites. J. Bacteriol. 2013, 195, 411–416.
  11. Brucker, R. M.; Bordenstein, S. R. The hologenomic basis of speciation: gut bacteria cause hybrid lethality in the genus Nasonia. Science 2013, 341, 667–669.
  12. Beck, T.; Levine, R.; Loayza, N. Finance and the sources of growth. J. Financ. Econ. 2000, 58, 261–300.
  13. Schumpeter, J. The Theory of Economic Development: The Economy as a Whole. In Joseph Alois Schumpeter: Entrepreneurship, Style and Vision; 2003; pp. 61–116.
Enhancing the Social Impact of Contemporary Music with Neurotechnology

Introduction

I am a contemporary classical music composer interested in developing new technologies to aid musical creativity and harness the role of music in social development. After having worked as a research scientist for Sony for a number of years, I moved to Plymouth University in 2003, where I founded the Interdisciplinary Centre for Computer Music Research (ICCMR) to conduct research into these topics. ICCMR is one of the main contributors to the development of a new discipline, which I refer to as Music Neurotechnology [1]. Research into Music Neurotechnology is truly interdisciplinary: it combines musical research with artificial intelligence, bioengineering, neurosciences and medicine. ICCMR’s research outcomes have been published in learned journals of all these fields; for example [2, 3, 4, 5, 6]. This paper introduces one of ICCMR’s most successful projects to date, which demonstrates the social impact of Music Neurotechnology research: the brain-computer music interfacing (BCMI) project. This project is aimed at the development of assistive music technology to enable people with severe physical disabilities to make music controlled with brain signals. In addition to building the technology, I am particularly interested in developing approaches to compose music with it and creating new kinds of contemporary music.

The BCMI Project

Imagine if you could play a musical instrument with signals detected directly from your brain. Would it be possible to generate music representing brain activity? What would the music of our brains sound like? These are some of the questions addressed by Music Neurotechnology research.

I am interested in developing Brain-Computer Interfacing (BCI) technology for music aimed at special needs and music therapy, in particular for people with severe physical disability. A BCI is generally defined as a system that enables direct communication pathways between the brain and a device to be controlled. Currently, the most viable and practical method of scanning brain signals for BCI purposes is to read the brain’s electroencephalogram, abbreviated as EEG, with electrodes placed on the scalp [7] (Figure 1). The EEG expresses the overall electrical activity of millions of neurones, but it is a difficult signal to handle because it is extremely faint, and it is filtered by the meninges (the membranes that separate the cortex from the skull), the skull and the scalp. This signal needs to be amplified significantly and analyzed in order to be of any use for a BCI. In BCI research, it is often assumed that: (a) there is information in the EEG that corresponds to different cognitive tasks, or at least a function of some sort, (b) this information can be detected and (c) users can be trained to produce EEG with such information voluntarily [8].

Figure 1. A BCI system extracts information from the EEG to control devices. (see PDF version for image)

I have coined the term Brain-Computer Music Interface, or BCMI, to refer to a BCI system for music [8]. My research into BCMI is motivated by the extremely limited opportunities for active participation in music making available for people with severe physical disability, despite advances in music technology. For example, severe brain injury, spinal cord injury and locked-in syndrome result in weak, minimal or no active movement, which therefore prevent the use of gesture-based devices. These patient groups are currently either excluded from music recreation and therapy, or are left to engage in a less active manner through listening only.

My collaborators and I have recently developed a BCMI, which we tested with a locked-in syndrome patient at the Royal Hospital for Neuro-disability in London [6]; her condition was caused by a severe stroke (Figure 1, photograph on the right hand side). Our BCMI is based on a neurological phenomenon known as Steady State Visually Evoked Potentials, abbreviated as SSVEP. These are signals that can be detected in the EEG, which are natural responses to visual stimulation at specific frequencies. For instance, when a person looks at various patterns flashing at different frequencies on a computer screen, this shows up in his or her EEG, and a computer can be programmed to infer which pattern he or she is staring at; for instance, the four patterns shown on the screen on the right hand side photograph, Figure 1. We created musical algorithms to translate EEG signals associated with different flashing frequencies into distinct musical processes. For example, looking at one flashing pattern would sound a certain note, looking at another would produce a certain rhythm, staring at another would change its tempo, and so on. The forthcoming full paper will describe this system in detail and will introduce the composition Activating Memory, which I composed with the technical assistance of Joel Eaton, who is currently in the final stages of his doctoral thesis on BCMI at Plymouth University’s ICCMR.
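The SSVEP principle described above lends itself to a simple frequency-domain sketch. The following is a hypothetical illustration, not the authors' actual BCMI implementation: the stimulus frequencies, action names, and band widths are all assumed for the example. Each on-screen pattern flickers at a known frequency; the program estimates which candidate frequency dominates the EEG spectrum and maps the winner to a musical process.

```python
import numpy as np

# Hypothetical sketch of the SSVEP idea -- NOT the authors' actual system.
# Frequencies and musical actions below are assumed values for illustration.
STIMULUS_HZ = [7.0, 9.0, 11.0, 13.0]   # four flashing patterns on screen
ACTIONS = ["play note", "add rhythm", "change tempo", "switch timbre"]

def classify_ssvep(eeg, fs):
    """Return the index of the stimulus frequency with the most spectral power."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Sum the power in a narrow band around each candidate frequency
    band_power = [power[(freqs > f - 0.5) & (freqs < f + 0.5)].sum()
                  for f in STIMULUS_HZ]
    return int(np.argmax(band_power))

# Simulate 2 s of EEG dominated by a 9 Hz SSVEP response plus noise
fs = 256
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(42)
eeg = np.sin(2 * np.pi * 9.0 * t) + 0.5 * rng.standard_normal(len(t))

choice = classify_ssvep(eeg, fs)
print(ACTIONS[choice])  # the 9 Hz component dominates, so this prints "add rhythm"
```

A real system would add spatial filtering, artifact rejection, and per-user calibration, but the core decision is this comparison of narrow-band power at the known flicker frequencies.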

Activating Memory is an unprecedented piece for a string quartet and a BCMI quartet. Each member of the BCMI quartet is furnished with an SSVEP-based BCMI system that enables him or her to generate a musical score in real-time. Each of them generates a part for the string quartet, which is displayed on a computer screen for the respective string performer to sight-read on the fly during the performance (Figure 2); a short video documentary is available [9].

Concluding Remarks

The technology and the compositional method developed for Activating Memory illustrate the interdisciplinary nature of Music Neurotechnology research and the benefits that such research can bring to humanity. This is an unprecedented piece of music, which is aimed at being much more than mere entertainment or a commodity for the music industry. Here, eight participants can engage in collaborative music making together, where four of them are not required to move. This forms a suitable creative environment for engaging severely physically disabled patients in music making: they are given an active musical voice to playfully interact among themselves and with the musicians of the string quartet. The first public performance of Activating Memory took place in February 2014 at the Peninsula Arts Contemporary Music Festival, Plymouth [10]. Physically disabled patients were not involved in this first performance. Currently, I am working with colleagues back in the hospital to trial the new technology with patients, with the objective of staging a concert performance of Activating Memory with them. On the research front, my team and I are developing techniques to expand the SSVEP approach. We are developing ways to detect EEG patterns related to emotional states to control algorithms that generate music.

Figure 2. For Activating Memory the parts for each string player are generated from the brains of four participants and displayed on a computer screen for sight-reading during the performance. (see PDF version for image)

References

  1. The term ‘Music Neurotechnology’ appeared in print for the first time in 2009 in the editorial of Computer Music Journal, volume 33, number 1, page 1.
  2. Anders, T.; Miranda, E.R. Interfacing Manual and Machine Composition. Contemporary Music Review 2009, 28(2):133-147.
  3. Miranda, E.R. Emergent Songs by Social Robots, Journal of Experimental and Theoretical Artificial Intelligence 2008, 20(4):319-334.
  4. Daly, I.; Malik, A.; Hwang, F.; Roesch, E.; Weaver, J.; Kirke, A.; Williams, D.; Miranda, E.R.; Nasuto, S.J. Neural Correlates of Emotional Responses to Music: An EEG Study. Neuroscience Letters 2014, 573: 52-57.
  5. Miranda, E.R.; Adamatzky, A.; Jones, J. Sounds Synthesis with Slime Mould of Physarum Polycephalum. Journal of Bionic Engineering 2011, 8: 107-113.
  6. Miranda, E.R.; Magee, W.; Wilson, J.J.; Eaton, J.; Palaniappan, R. Brain-Computer Music Interfacing (BCMI): From Basic Research to the Real World of Special Needs. Music and Medicine 2011, 3(3):134-140.
  7. The EEG is a measurement of brainwaves detected using electrodes placed on the scalp. It is measured as the voltage difference between two or more electrodes on the surface of the scalp, one of which is taken as a reference. Other methods for measuring brain activity include MEG (magnetoencephalography), PET (positron emission tomography) and fMRI (functional magnetic resonance imaging), but they are not practical for BCI.
  8. Miranda, E. R.; Castet, J. Eds. Guide to Brain-Computer Music Interfacing. Springer: London, United Kingdom, 2014.
  9. A short documentary is available here: http://vimeo.com/88151780 (accessed on 30/12/2014).
  10. This event was widely reported in the news internationally; for example BBC News: http://www.bbc.co.uk/news/technology-26081451 (accessed on 30/12/2014).
Algorithmic Imaginaries. Visions and Values in the Co-Production of Search Engine Politics and Europe

Introduction

Information and communication technologies (ICTs) have been described as transcending and transforming national borders, political regimes, and power relations. They have been envisioned as creating a global network society with hubs and links rather than cities and peripheries; "technological zones" (Barry 2006) rather than political territories. This reordering of distance and space was described as going hand in hand with processes of reordering social life. Such deep entanglements of technological and social arrangements have been termed processes of co-production (Jasanoff 2005). While this "sociotechnical imaginary of the internet" (Felt 2014) was framed as all-encompassing and world-spanning at first, it is now increasingly seen as conflicting with the diversity of cultural, political, and social values on the ground. Accordingly, alternative interpretations of ICTs and their multiple socio-political implications have emerged over the past years.

Especially in the European context, tensions may be observed between US-American internet services, most importantly Google and its "algorithmic ideology" (Mager 2012, 2014), and European visions and values. After the NSA scandal, critical voices have become louder and louder, both in the policy and the public arena. All of a sudden, issues like privacy, data protection, informational self-determination, and the right to be forgotten have been conceptualized as core European values (even though European secret services heavily surveilled their citizens too, arguably more intensely than the NSA in the British case). This shows that there is a European voice forming that aims at distancing and emancipating Europe from US-American tech companies and their business models based on user-targeted advertising and large-scale citizen surveillance. However, it further shows that there are tensions running through European countries and their national interests, identities and ideologies too. One reason is that Europe is neither a clear-cut, homogeneous entity, nor fixed and stable. In the context of biotechnology policy Jasanoff (2005: 10) argues: "Europe in particular is a multiply imagined community in the minds of the many actors who are struggling to institutionalize their particular versions of Europe, and how far national specificities should become submerged in a single European nationhood – economically, politically, ethically – remains far from settled."

Methods

So how is Europe imagined in the context of search engine politics, and how are search engines imagined in Europe? And how does the European imaginary relate to national visions and values of search engines? These are the main questions to be answered in the presented analysis, taking Austria as a case study. Analyzing European policy discourses, the study examines how search engines – Google in particular – are imagined in the European policy context, what visions and values guide search engine politics, and how Europe is constructed in these narratives. Analyzing Austrian media debates, the project investigates how the European imaginary is translated into and transformed in the Austrian context, how Google is portrayed in these debates, and what national specificities shape the narratives. A particular focus is put on the ongoing negotiation of the European data protection reform, since this is a central arena where search engines (and other data processing technologies like social media, etc.) and the European identity are co-constructed these days, but also a site where European disparities, national interests, and local value-systems are at stake. Using a discourse analytical approach and the concept of "sociotechnical imaginaries" (Jasanoff and Kim 2009), this study will give insights into the way ICTs and Europe are co-produced, but also into the tensions and contradictions that appear between the European imaginary and national interests. While European policy documents try to speak with one voice, the Austrian media show more nuanced stories of power relations, struggles, and friction that open up a view on the fragility of the European identity when it comes to sensitive, value-laden areas like search engine politics.

Results and Discussion

Google is a particularly interesting case in this respect, since it was one of the first US-American tech companies to come under scrutiny in the European context. In 2010 Google tried to launch its Street View service on the European market. Rather than euphorically embracing the service, however, European citizens, NGOs, and policy makers took to the barricades and started protesting against Google cars in various cities and regions. An Austrian farmer, for example, sparked media publicity by attacking a Google car with a pickaxe. After Google's illegal scraping of open WiFi data, Google cars were banned from Austrian streets for some time (not surprisingly, the service was continued later on, after Google accepted some restrictions). While the Street View debate was the first to have values like privacy and data protection at its core, the issue was handled nationally back then. Every European country took different actions according to its stance towards the service (varying from unrestricted acceptance in some countries to (initial) blockage in others).

Despite these differences among European countries (or also because of them), a European vision – a European "algorithmic imaginary" – started to form in the aftermath of the Street View debate. While it was only a silent voice at first, it grew into a stronger message that took its written form in the first draft of the European data protection reform, launched in early 2012. Since then various actors have tried to force their interests into the legislative text – most prominently the US-American IT industry, but also European NGOs and national stakeholders; some of them started lobbying even before the European Commission presented its very first draft. These heavy negotiations show how important this piece of text is for multinational actors doing business on the European market. Even though the reform is far from finished, the "right to be forgotten" judgment that forced Google to obey European law may be seen as a first step towards putting the European imaginary into practice. The Austrian media frame this case as a success in showing US-American IT companies like Google that doing business on the European market requires obeying European law. Looking more closely and integrating national visions and values into the analysis, however, indicates how fragile the European imaginary still is, and what tensions and contradictions it faces when being translated into national and local contexts. It shows that Europe tries to speak with a strong voice when addressing other countries and continents, the US most importantly, but how weak its voice becomes when it is confronted with itself. The ongoing negotiation of the data protection reform offers particularly rich material to trace this dynamic. It is an arena where search engines, business models, and algorithmic logics are negotiated, but also an arena where Europe is forming and falling apart – both at the same time.

Conclusions

So if our information society is at the crossroads, as stated in the conference abstract, we need to understand the tight entanglements between technological and social arrangements before taking the next junction. Only by (re)grounding global ICTs in specific socio-political contexts may alternative routes be taken towards more democratic, more sustainable, and more culturally sensitive network technologies (whether this requires stricter regulation of US-American technologies, the development of alternative "European" services, or both, remains to be seen). Finally, the paper will discuss what we may learn from the geopolitics of search engines in terms of global power relations, European identity construction, and concepts of nationhood.

Acknowledgments

The research presented in this paper is supported by the Jubilee Fund of the Austrian National Bank (OeNB), project number 14702.

References and Notes

  1. Barry, A. (2006) Technological Zones. European Journal of Social Theory 9(2): 239-253.
  2. Felt, U. (forthcoming) Sociotechnical imaginaries of “the internet”, digital health information and the making of citizen patients, to appear in Hilgartner S., Miller, C., and Hagendijk, R.: Science and Democracy: Making Knowledge and Making Power in the Biosciences and Beyond, London/ New York: Routledge.
  3. Jasanoff, S. (2005) Designs on Nature. Science and Democracy in Europe and the United States, Oxfordshire: Princeton University Press.
  4. Jasanoff, S. and S. Kim (2009) Containing the Atom: Sociotechnical imaginaries and Nuclear Power in the United States and South Korea, Minerva 47(2): 119-146.
  5. Mager, A. (2012) Algorithmic Ideology. How capitalist society shapes search engines, Information, Communication & Society 15(5): 769-787.
  6. Mager, A. (2014) Defining Algorithmic Ideology: Using Ideology Critique to Scrutinize Corporate Search Engines, Triple C. Cognition, Communication and Co-Operation 12(1).
Embodied Cognition and Information

No bits were destroyed in the making of this paper. This statement is a lie.

This paper explores the nature of information by bringing together two questions posed by Luciano Floridi in his ‘Open problems’ paper [1].

Firstly:

P.4: DGP, the data-grounding problem: How can data acquire their meaning?

(and the later question Can PI [Philosophy of Information] explain how the mind conceptualises reality?)

and

P.11: The MIB (mind-information-body) problem: Can an informational approach solve the mind-body problem?

In the first question, Floridi refers to work by Searle [2] and Mingers [3]. Interestingly, no reference is made to either author for the second question, even though both have made relevant and interesting contributions to that question specifically.

For Searle, there is no more a mind-body problem than any other dualism we choose to create semantically: it is a conceptual construct, nothing more. For Mingers, a similar conclusion is reached by applying a phenomenological approach to Artificial Intelligence: a disembodied intelligence is a contradiction. In both cases, the mind and body are one – it is not possible to escape acknowledging that thoughts arise from a physical (cognitive) process.

This position resonates with Landauer's [4] view of information as an 'inevitably' physical entity: at some point, all information is embodied. On this view, information retains a Shannon-like definition, a deterministic part of the embodied model in which notions of semantic information are (almost) ignored. But simply because information is embodied, it does not follow that information is an object in and of itself, only that we define it as such under the condition of its physical embodiment. Once again, what might have been a simple theory is interrupted by semantic conception.
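Landauer's thesis has a precise quantitative core, which also gives the epigraph about destroying bits its bite: erasing one bit of information dissipates a minimum amount of energy,

```latex
E_{\min} = k_B T \ln 2
```

where \(k_B\) is Boltzmann's constant and \(T\) is the temperature of the surrounding environment. It is erasure, not computation as such, that carries this irreducible physical cost.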

More recent developments in information theory have focused on semantic information and its essentially human construction and use. The Difference that Makes a Difference conferences place this as a specific focus in terms of information in human meaning and value. And this, too, arises from a pragmatic tradition and approach: what is the point, utility, use, or purpose of information, and how might such explorations help 'define' it?

So we have two views of information that both arise from a pragmatic starting point: the first observes simply that information is physically embodied; the second that, at some point, information is defined and construed by people (a necessary part of both the ontology and epistemology). But both views also have another feature that will be used as the focus of this paper: they both make use of embodiment in some way: embodying information or cognition in the physical world.

It is argued that embodiment of information and the embodied mind present a valuable 'place' to explore the nature of information. In fact, an inversion of Floridi's 11th problem is proposed: 'can an embodied mind shed light on P.1: What is information?'

The paper will present a series of examples of argument construction that use embodiment to explore the nature of information, considering: embodiment itself, embodied cognition and embodied information. Examples of possible approaches are now given for each.

General embodiment

General embodiment can allow more nuanced views and approaches in constructing arguments and ideas. For example, if we begin with Merleau-Ponty’s embodiment in phenomenology [5], we must then accept that the simple model of ‘one thing leading to another’ (as defined semantically and at a certain scale) cannot be used in construction of arguments: embodiment supposes that dualities such as perception and thought are embodied and happen ‘together’ (not necessarily synchronously). This removal of duality has an immediate consequence for the epistemology of semantic information theory but it also offers an interesting speculation environment within which we might pose further questions. To apply this, the ‘semantic scale’ referred to becomes an essential condition under which definitions of both the ontology and epistemology of information are constructed (one recognized by Bell [6] at the quantum scale, for example). But it is the boundary of these scales that hopefully will prove useful in providing a space within which we might explore semantic and deterministic views of information outlined previously (or set limits on what we might know about such reconciliations).

Embodied cognition

Starting with an acceptance of embodied cognition, we have an immediate series of directions to turn in exploring information (see Wilson’s summary presenting six views [7]). For example, considering Lakoff and Johnson’s conceptual metaphors [8,9,10] (cognitive constructs that arise from our embodied understandings of the world around us) provides a starting point and conditions under which we might consider the nature of semantic information. This might be achieved simply (by defining semantic information in terms of embodied cognition) or it may be achieved by considering the processing that takes place in order to transfer, translate or re-apply such metaphors. That is, we are now interested in the transmission of conceptual metaphors as an expression of information. As with general embodiment, this represents a boundary space between the physical process of cognition (an embodied, physical act) and the conceptual process (the use of embodied metaphor as meaningful communication or transmission of information).

Embodied information

Another approach is to treat information as embodied in itself: accept Landauer's thesis [4], and follow its consequences to explore information as embodied in cognition. The interesting question that arises is at what point cognition becomes embodied information. For example, recent research in cognitive neuroscience shows that Broca's area (the part of the brain generally considered to be a main processing centre for speech) is actually less active whilst we speak [11], suggesting a necessary semantic limit: the translation of our ideas and conceptions into language changes those very ideas, as a result of the emergent nature of cognitive processes (for other examples see [12,13,14]). By this argument we have another embodiment of physical information and semantic information, but this time in the same series of cognitive processes – both with very different approaches and understandings of information. Once again, it is the 'in-between' of these arguments that may provide an interesting space for speculation.

These outlines are given as examples only. The material point here is the construction of conceptual spaces within which further observation or understandings might be made – these are in some way a response to Floridi’s P.16 sub-question, that information is neither here nor there but embodied between here and there. Any or all of these approaches may end in trivial or complex failure. In support of this, Floridi’s points (after Hilbert) about the rigour of method(s) and the value of failure will be adopted.

Above all, this paper will be presented in the spirit of the Difference that makes a Difference – a conversational, collegiate discussion that seeks alternative views that hopefully allow new thoughts on information and its value to emerge.

References and Notes

  1. Floridi, Luciano (2004) ‘Open Problems in the Philosophy of Information.’ Metaphilosophy, 35(4), pp. 554–582. [online] Available from: http://doi.wiley.com/10.1111/j.1467-9973.2004.00336.x
  2. Searle, John R. (1980) ‘Minds, Brains, and Programs.’ Behavioral and Brain Sciences, 3, pp. 1–19.
  3. Mingers, John (1997) ‘The Nature of Information and Its Relationship to Meaning.’, in Winder, R. L., Probert, S. K., and Beeson, I. A. (eds.), Philosophical Aspects of Information Systems., London, Taylor & Francis. [online] Available from: https://books.google.co.uk/books?id=lsJo2P3QvMQC&pg=PA73&lpg=PA73&dq=The+nature+of+information+and+its+relationship+to+meaning,+john+mingers&source=bl&ots=_aGXQ-NxsV&sig=8usmA_wcxbf_M10fsT6cFNSslxE&hl=en&sa=X&ei=d0PuVP7aJ6bj7Qak9IHIBA&ved=0CDAQ6AEwAQ#v=tw
  4. Landauer, Rolf (1996) ‘The physical nature of information.’ Physics Letters, Section A: General, Atomic and Solid State Physics, 217(July), pp. 188–193.
  5. Merleau-Ponty, M (1962) Phenomenology of perception, London: Routledge & Kegan Paul.
  6. Bell, John S. (1964) ‘On the Einstein Podolsky Rosen Paradox.’ Physics, 1, pp. 195–200.
  7. Wilson, Margaret (2002) ‘Six views of embodied cognition.’ Psychonomic bulletin & review, 9(4), pp. 625–36. [online] Available from: http://www.ncbi.nlm.nih.gov/pubmed/12613670
  8. Lakoff, George and Johnson, Mark (1980) Metaphors We Live by, University of Chicago Press.
  9. Johnson, Mark and Lakoff, George (2002) ‘Why cognitive linguistics requires embodied realism.’ Cognitive Linguistics, 13(3), pp. 245–263.
  10. Lakoff, George and Johnson, Mark (1999) Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, New York, Basic Books.
  11. Flinker, Adeen, Korzeniewska, Anna, Shestyuk, Avgusta Y., Franaszczuk, Piotr J., et al. (2015) ‘Redefining the role of Broca’s area in speech.’ Proceedings of the National Academy of Sciences. [online] Available from: http://www.pnas.org/content/early/2015/02/09/1414491112.abstract (Accessed 19 February 2015)
  12. Ariely, Dan (2012) The (Honest) Truth About Dishonesty 1st ed., New York, HarperCollins.
  13. Mlodinow, Leonard (2012) Subliminal The New Unconscious and What it Teaches Us 1st ed., London, Penguin Books.
  14. Schooler, Jonathan W and Engstler-Schooler, Tonya Y (1990) ‘Verbal overshadowing of visual memories: Some things are better left unsaid.’ Cognitive Psychology, 22(1), pp. 36–71. [online] Available from: http://linkinghub.elsevier.com/retrieve/pii/001002859090003M
  • Open access
  • 60 Reads
Self-Organization of Information and Value - Discussion of the Relation to Physics

Introduction

There is general agreement that mathematicians like Shannon and Kolmogorov and physicists like Maxwell, Boltzmann, Szilard and Stratonovich contributed much to our modern understanding of information. Nevertheless, there is no general agreement about the relation between physics and information. Is information a concept of physics? Can it be reduced to physical terms like entropy? There are many interesting discussions of this problem (see e.g. the FIS discussion [1]), but general agreement is not yet in sight. This author, together with Rainer Feistel, has expressed the opinion that “information” and the related concept of “values” are not physical terms [2,3], in spite of the fact that the transfer of information and value is always tied to the transfer of a universal physical quantity, entropy. In this way information is subject to physics, but it cannot be reduced to physics alone. In our opinion, information-processing systems exist only in the context of life and its descendants: animal behaviour, human sociology, science, technology, etc. The historically first such system was the genetic expression machinery of early life, where DNA triplets were used as symbols for the amino acid sequences of the proteins to be assembled. However, the process by which life appeared on earth, and the way symbolic information developed out of non-symbolic, native information, was extremely complicated and is only partly known. We will not discuss this problem here but leave it to the talk given by Rainer Feistel. Instead we will also discuss the concept of values, likewise a typical emergent quantity related to physics, in spite of the fact that it cannot be reduced to physics [4,5].

Self-organization of information and values

We know that the existence of all living beings is intimately connected with information processing and valuation; this we consider the central aspect of life. We define a living system as a natural, ordered and information-processing macroscopic system with an evolutionary history. This may even serve as a criterion for the crew of a spaceship who, far from our home planet, encounter unknown objects moving in space, sending signals and performing manoeuvres: should they treat them with the respect due to living objects? Information processing we consider a special, high form of self-organization. Information is an emergent property, but we see several open problems here. How did information emerge by self-organization? Genuine information is symbolic information, needing a source that creates signals or symbols, a carrier to store or transport it, and finally a receiver that knows the meaning of the message and transforms it into the structure or function for which the text is a blueprint. In this way symbolic information is always related to an ultimate purpose connected with valuation.

Information-processing systems exist only in the context of life and its descendants: animal behaviour, human sociology, science, technology, etc. To our knowledge, the historically first such system was the genetic expression machinery of early life, where DNA triplets were used as symbols for the amino acid sequences of the proteins to be assembled. However, the details of how life appeared, and of the way symbolic information developed out of non-symbolic, native information, are hidden behind the veils of ancient history. Other, later examples of the self-organization of information are much easier to study; this was first done by Julian Huxley at the beginning of the last century in behavioural biology. He called the evolutionary transition from the use activities of animals to signal activities “ritualisation”. In our concept this transition to ritualisation, or symbolization, is a central point. A more detailed view of this transition process reveals rather general features which we consider a universal route to information processing [2,3,4]. When a process or structure becomes symbolized, its original full form of appearance is successively reduced to a representation by symbols, together with the building-up of a processing machinery which is still capable of reacting to the symbol as if its complete original were still there. At the end of the transition, the physical properties of the symbolic representation are no longer dependent on the physical properties of its origin, and this new symmetry (termed coding invariance) makes drift and diversification possible because of neutral stability. In all processes transferring and processing information, some amount of entropy necessarily flows. The argument that this entropy flow is quantitatively small is irrelevant: who would claim that the waves on an ocean or the structures on a planet are of no relevance?
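To give a feeling for the magnitudes involved, the minimum entropy and energy cost of handling a single bit can be computed from Landauer’s bound. The short sketch below is added purely for illustration (the constants are standard physical values; the example itself is not part of the original abstract):

```python
import math

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2) of
# energy, with a corresponding entropy flow of k_B * ln(2) to the environment.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact by SI definition)
T = 300.0           # assumed ambient temperature, K

energy_per_bit = k_B * T * math.log(2)   # joules per erased bit
entropy_per_bit = k_B * math.log(2)      # J/K per erased bit

print(f"Minimum dissipation per bit at {T} K: {energy_per_bit:.3e} J")
print(f"Entropy flow per bit: {entropy_per_bit:.3e} J/K")
```

The numbers are tiny (on the order of 10^-21 J per bit), which is exactly the point of the passage above: the flow is small but never zero, so information transfer is always physically anchored.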

What can be said about values, a central concept of biology and humanity which also appears already in physics? In physics the concept of value was introduced by Wilhelm Ostwald, who considered entropy a measure of the value of energy. In the social sciences the concept of values was first introduced by Adam Smith in the 18th century, in an economic context; Smith’s fundamental ideas were worked out later by Ricardo, Marx, Schumpeter and many other economists. In another social context the idea of valuation was used at the turn of the 18th century by Malthus. Parallel to this development in the socio-economic sciences, a similar value concept was developed in the biological sciences by Darwin and Wallace. Sewall Wright developed the idea of a fitness landscape (value landscape), which was subsequently worked out by many authors; in recent years many new results on the structure of such landscapes have been obtained by Peter Schuster and his coworkers in Vienna. We will explain our view that the value concept, though irreducible, has, including its expressions as biological or ecological fitness or as economic value, some background in physics and in particular in entropy.
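The fitness-landscape idea can be made concrete with a deliberately toy evolutionary loop. Everything in the sketch below (the "one-max" landscape, the mutation rate, the genome length) is a hypothetical illustration, not taken from the abstract; the point is only that valuation enters as the selection step:

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Hypothetical one-max landscape: fitness = number of 1-bits in the genome.
def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Simple evolutionary loop: the valuation (fitness comparison) decides which
# variant survives each generation.
genome = [0] * 50
for generation in range(2000):
    variant = mutate(genome)
    if fitness(variant) >= fitness(genome):  # selection = valuation step
        genome = variant

print(fitness(genome))  # climbs toward the peak of the landscape
```

Even this trivial landscape shows the structure the abstract describes: fitness is not a physical quantity of any single bit, but an emergent valuation of the whole configuration relative to the dynamics of selection.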

The formation of information and values is a collective phenomenon and is due to self-organization. We understand self-organization as a "process in which individual subunits achieve, through their co-operative interactions, states characterized by new, emergent properties transcending the properties of their constitutive parts." In this respect we would like to stress the role of values, which are indeed among the most relevant emergent properties. An example is the value of a species, meaning its fitness in the sense of Darwin. Competition is always based on some kind of valuation.

The concepts of values and fitness landscapes are rather abstract and qualitative. Our point of view is that values are abstract, non-physical properties of subsystems (species) in a certain dynamical context. Values express the essence of biological, ecological, economic or social properties and relations with respect to the dynamics of the system [3,4,5]. From the point of view of modelling and simulation, values are emergent properties. The valuation process is an essential element of the self-organization cycles of evolution.

Conclusions

Information and values are non-physical emergent properties, yet both have roots in physics that are important for their understanding. Information and valuation were already absolutely central to the origin of life: in order to survive, living creatures needed information and a standard of basic values concerning food, shelter, protection, etc. Modern societies are based on information processes and on an exchange value, money. Both concepts are intimately connected and have roots in physics.

References and Notes

  1. Marijuan, P.C. et al. Foundations of Information Science, fis@listas.unizar.es
  2. Ebeling, W.; Feistel, R. Physik der Selbstorganisation und Evolution, 2nd ed.; Akademie-Verlag: Berlin, 1986.
  3. Feistel, R.; Ebeling, W. Physics of Self-Organization and Evolution, 1st ed.; Wiley-VCH: Weinheim, 2011.
  4. Nicolis, J.S.; Basios, V. Chaos, Information Processing and Paradoxical Games: The Legacy of J.S. Nicolis; World Scientific: Singapore, 2015.
  5. Ebeling, W. Value in Physics and Self-organization. Nature, Society and Thought, 19, 133-143.
  • Open access
  • 88 Reads
UNICE Global Brain Project: "Creating a Global, Independent, Public-Policy Answer-Engine That Will Facilitate Governance, While Preparing for and Reducing the Dangers of Artificial General Intelligence."

UNICE is an acronym for Universal Network of Intelligent Conscious Entities.[1] I coined the term in the 1990s to describe the transformation of our species resulting from a new form of cooperative, intelligent life developed from the hive-like interaction of computers, humans, and future forms of the Internet.[2] Before that happens, and to help ensure that Artificial General Intelligence (AGI) doesn’t accidentally, or intentionally, wipe out the human race, a prudent and realistic goal would be to first develop UNICE as an independent, cognitive-computing tool for governance, protected from the interference of special interests. UNICE, using AI, could help lay the ethical groundwork for the development of AGI. It could also help people all over the world democratically govern themselves by giving them access to concise and rational analysis based on facts. An existing, not-for-profit, informational website, UNICE.info, launched in 2007, could be developed into a global brain capable of making assessments, judgements, and recommendations based on information gleaned from all available sources. The goal would be to help bring the greatest good to the greatest number, in the most efficient manner, to this and future generations.[3] Entities seeking special advantage for themselves or their group, company, religion or country will resist this universal guide to good governance. But fairness, and possibly our very survival, dictates that something like UNICE should exist for the benefit of all.

Because UNICE would be edited and written by increasingly sophisticated cognitive AI (which will later transition into AGI) UNICE would be far less subject to the inaccuracies, manipulation and bias found in Wikipedia, or in the thousands of existing research institutes we call “think tanks.” Even as an intelligent zombie (AI without self-aware consciousness), UNICE could become a transparent form of distributed universal governance, protected with publicly accountable checks and balances, performed by the administrators and their democratically elected board of directors. It could also function as a global conscience, monitoring all governments and their actions.

Seed Topic: wiki-UNICE is a proposed site on RationalWiki where any person can write, discuss, elaborate or criticize policy topics. Long before cognitive-UNICE is functional, problems and solutions on various issues will be systematically listed in seed topics that can be copied into collaborative topics for community editing. Wikipedia articles are required to be written in an often tedious, encyclopedic neutral point of view (NPOV), but RationalWiki allows original research, opinion and humor.

Collaborative Topic: A seed topic will be duplicated on the same RationalWiki page and transformed into an editable collaborative topic. It can be modified by anyone willing to follow the goal (bringing the greatest good to the greatest number in the most efficient manner possible) and able to make evidence-based edits. Both before and after cognitive-UNICE is launched, anyone will be able to examine UNICE’s analyses and provide summaries, criticism and other interactive services in wiki-UNICE.

Cognitive-UNICE: The purpose of cognitive-UNICE would be to create an evidence-based point of view supported by facts, scientific studies, and democratic precedent, within the framework of a simple guideline.[4] The topics and commentary in wiki-UNICE can be the seeds for articles written by cognitive-UNICE. As AI and AGI are developed, cognitive-UNICE would grow more sophisticated, responsive and useful. Cognitive-UNICE will study and incorporate wiki-UNICE in order to enhance its interactivity. Concurrent with the development of a public policy wiki, the cognitive part of UNICE could initially begin functioning like IBM’s medical Watson supercomputer, which is fast becoming the world’s best diagnostician.[5] IBM’s Watson Group reports that it will also have a “Public Sector” division whose motto is “helping government help its citizens.”[6] Before these Big Questions are answered, we have some serious business to attend to.

In recent history, our human population has expanded into and exploited every niche. As dreamers, schemers, inventors, warriors, builders, consumers, and breeders, we have been like rapacious caterpillars encircling the Earth in a glistening chrysalis of technology. The outcome of our global metamorphosis is being determined by what we do now. Will this chrysalis be our tomb? Will our web of humanity, along with many other species, be destroyed before we reach our potential, just because we couldn’t learn to control our numbers, temper our malevolent urges, or govern ourselves? Perhaps we will be cannibalized by a beast of our own creation because, like us, it will fail to sufficiently respect the lesser creatures or share power equitably. I prefer to think the chrysalis will incubate us to full maturity, and that when the time comes, we will break out of our shell and soar like that most beautiful of small creatures, and touch lightly upon the Earth. UNICE, which will comprise all of us working toward a cooperative goal, could help us safely make that transition.

Figure 1. UNICE is depicted as a young, mixed-race woman. She is young because the median age for everyone on Earth is 24 and youth represents openness to new ideas. She is female because cooperation, empathy, sensitivity, tolerance, nurturance, compassion and justice (as in Justitia or Lady Justice) are traditionally considered to be feminine traits. UNICE’s afro represents the interconnected tendrils of the World Wide Web.

(see PDF version for the Figure).

 

References and Notes

  1. Michael E. Arth, “UNICE”, Consciousness Research Abstracts, Journal of Consciousness Studies, 2008, (Toward a Science of Consciousness Conference) Tucson, AZ, USA, p. 151.
  2. Michael E. Arth, “The Future.” In Democracy and the Common Wealth: Breaking the Stranglehold of the Special Interests, Golden Apples Media Inc., USA, 2010, ISBN 978-0-912467-12-2. pp. 438-439
  3. Michael E. Arth, Chapter 1, “Restoring Democracy.” In Democracy and the Common Wealth, Golden Apples Media Inc., USA, 2010, p. 12.
  4. The guideline, which should also be the goal of politics, is “to bring the greatest good, to the greatest number, in the most efficient manner possible, to this and future generations.”
  5. Lauren F. Friedman, “IBM’s Watson Supercomputer may soon be the best doctor in the world.” Business Insider, April 22, 2014
  6. com website, accessed January 11, 2015
  • Open access
  • 72 Reads
General Systems Theory and Media Ecology: Parallel Disciplines that Inform Each Other

General systems theory (allgemeine Systemtheorie) was pursued by a number of thinkers, but its origins seem to date back to 1928 and the biological work of Ludwig von Bertalanffy’s PhD thesis. There are many definitions of a general system, but in essence a general system is one composed of interacting and interrelated components, such that an understanding of it must treat the system as a whole and not as a collection of individual components. The behaviour of the individual components of a general system can only be understood in the context of the whole system and not in isolation; hence general systems theory is opposed to reductionism, whether of Cartesian or Newtonian origin. As is often the case, taking a systemic approach reveals unintended consequences that an analysis of individual components would not yield. General systems theory therefore includes complexity theory, emergent dynamics, cybernetics, control theory, dynamic systems theory, biological ecology, and media ecology. The focus of this essay is to consider the parallels between the different forms of general systems theory and media ecology, and how they inform each other.

The general systems approach is an ecological approach, since an ecosystem is by definition a general system. From a media ecology perspective, as first suggested by Marshall McLuhan (1964), the medium is the message. A general system is a medium; its message is the non-linear interaction of the components of the system. McLuhan wrote, “A new medium is never an addition to an old one, nor does it leave the old one in peace. It never ceases to oppress the older media until it finds new shapes and positions for them” (McLuhan 1964, 174). The same applies to a general system: each element of a general system or ecosystem impacts all the other components of the system. The message of the general system is the dynamics and cross-impacts of its components, not the behavior of the individual members of the system. The general system is the unit of analysis. So we might say that the medium is the general system is the message.

General systems theory and cybernetics are intimately related, and in a certain sense inform and cross-pollinate each other to such a degree that some regard them as slightly different formulations of the same interdisciplinary practice. One may also include in this mix emergent dynamics, or complexity theory, since these approaches also treat a system as more than its components, with the added feature that they explicitly entail the notion that the supervenient system (i.e. the general system) possesses properties that none of its components possess. In other words, the system as a whole has unintended consequences which an analysis of its components cannot reveal. Emergent dynamics and complexity theory grew out of the general systems approach once computing techniques allowed scientists to deal with non-linear equations and hence to model general systems in which the interactions among the components are non-linear.
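As a minimal, self-contained illustration of this last point (an example added here, not drawn from the essay), the logistic map is a one-line non-linear iteration whose long-run behaviour cannot be read off from any individual step:

```python
# The logistic map x -> r*x*(1-x): a minimal non-linear system whose
# whole-trajectory behaviour emerges only from the iteration as a whole.
def logistic_orbit(r, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.5: the orbit settles to the fixed point 1 - 1/r = 0.6.
# r = 3.9: the orbit is chaotic and never settles.
stable = logistic_orbit(2.5, 0.2, 200)
chaotic = logistic_orbit(3.9, 0.2, 200)

print(round(stable[-1], 4))
print(min(chaotic[-50:]), max(chaotic[-50:]))
```

The same update rule, with only a parameter changed, yields qualitatively different system-level behaviour, which is exactly the kind of emergent, non-linear dynamics that resisted analysis before computational modelling.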

  • Open access
  • 82 Reads
Tools and Their Users

Introduction

The statement that every tool shapes its users’ habits and mentality would apply to any technology in any era of human history. But today’s advanced ICT devices, which accompany almost all human activities from very early ages onwards, seem to have a qualitatively different impact. Here I will try to show how these devices, by virtue of a special feature of theirs, have become dangerously entangled with the self-ordained dynamics of the market economy and with outcome-measurement-based educational policies, eventually jeopardising the raising of the next generations of technology developers.

In an earlier article I searched for the reasons behind a behavioural change that emerged almost suddenly among my students around 2007-08, and tried to show a possible correlation with the penetration of performance-based evaluation into various domains of social life. The problematic change described there was the students’ search for “a safe haven in imitating machine intelligence, which brought with it submission to externally set targets, strong dependence on external appreciation, insufficient self-confidence, and rapid loss of motivation under failure” [1]. During the many years that followed, these symptoms persisted, and within the last academic year unprecedented types of error started to appear in exam papers and homework, indicating that some mechanisms that used to compensate for the adverse effects of these evaluation policies must have been deactivated, or that new mechanisms aggravating those effects must have stepped in. I suspect that ICT devices are partially responsible.

Education and Raising the Edifice of Comprehension

In order to analyse the impact on students of the entanglement of advanced ICT devices, outcome-measurement-based educational policies and the dynamics of the market economy, let us envisage comprehension as a complex dynamic edifice that rises on the foundation of cognitive abilities, which are extensions of embodied tacit knowledge. In the course of development (which is, strictly speaking, a life-long process) each level is supposed to emerge as an abstraction out of the former. Alexandre Borovik [2] describes this process very vividly in the context of mathematics:

“The crystallisation of a mathematical concept (say, of a fraction), in a child’s mind could be like a phase transition in a crystal growing in a rich, saturated—and undisturbed—solution of salt. An “aha!” moment is a sudden jump to another level of abstraction. Such changes in one’s mode of thinking are like a metamorphosis of a caterpillar into a butterfly.”

However, this creative and spontaneous process is highly sensitive to external impacts, particularly systematic ones. Sustainable development of the edifice of comprehension requires meticulously tuned “boundary conditions”, i.e. external supports and stimuli that step in and out with correct timing, catalysing the actualisation of potentials in a way coherent with the lower levels of the edifice as well as with the necessities of the environment. This person-dependent and highly unpredictable developmental process must evolve at its own pace and cannot be evaluated on the basis of standardised criteria. In particular, outcome-measurement-based evaluation interferes harmfully with the developmental dynamics: it biases development in favour of the selected measurable criteria, while assets that are immeasurable, or at least not immediately assessable, yet crucial for progress, such as true comprehension, intuition and motivation, go unnoticed and unappreciated and eventually fade away.

Nevertheless, this pessimistic picture need not materialise as long as the child, and later the student, has opportunities outside the official education system to get in touch with life, to receive stimuli and appreciation for the unnoticed assets, and to feel pressure to improve his or her unnoticed weaknesses. It is exactly at this point that the advanced (especially IC) technologies which penetrate children’s lives intensively and at very early ages make a qualitative difference compared to more traditional ones, by blocking the channel of interaction with real life. I suggest that the characteristic feature of contemporary IC technologies responsible for this blockage is their highly developed and excessively user-friendly interfaces.

Facing the Interface

If we try to apply the concept of a “user interface” to a traditional tool, e.g. a hammer, the user interface would probably be its handle: the interface between the user and the part of the tool that actually performs the job. Such an interface protects the user from the physical inconveniences of the job, but does not prevent him or her from witnessing the operation. The same could be said of the keyboard of an old-fashioned typewriter, where one has a free glance at the operation of the internal mechanism. Contemporary ICT devices, by contrast, are characterised by the “opacity” of their user interfaces. These user-friendly interfaces translate even sophisticated operations into basic sensorimotor tasks like clicking, shifting, dragging and dropping, and into basic cognitive tasks like pattern matching. Never before have technology users been so perfectly “protected” from the complexity of the underlying phenomena and so absolved from the exigency of having some comprehension of them. Set against the background of the market economy, this low intellectual demand on users and the low prices of mass-produced consumer devices create a self-amplifying positive feedback loop, producing masses of cheap devices and huge masses of increasingly younger customers.

From here onwards I will refer to user interfaces that isolate the user from the actual operational level while presenting him or her with a virtual face as isolating interfaces. The notion of interface can even be extended metaphorically: all devices, and the technological systems supporting them, can themselves be considered interfaces between the user and the real problem.

The complicity of isolating interfaces (both real and metaphorical), outcome-measurement-based education system and market economy is multi-directional:

Market dynamics unleashes masses of cheap ICT gadgets with isolating interfaces upon the plastic brains of young children. While these toys become parents’ favourite means of keeping children occupied, whole generations are isolated, from very early ages onwards, from the challenging stimuli of the real world that would have compelled the emergence of new cognitive abilities. At the same time, these gadgets give children a false sense of self-confidence; they eventually attract many of them to the professions of prospective technology developers and assist their progress even during the first years of higher education by translating relatively sophisticated tasks into the language of a lower cognitive level, creating in the students the illusion of mastery and autonomy in the respective domains. Meanwhile, the outcome-measurement-based system conceals students’ lack of true comprehension from the educators for a long while. The illusion of mastery and autonomy can be sustained only until students reach a stage that demands creativity, where most of them begin to discern the huge gap of incompetence beneath the seemingly safe ground. Combined with high performance pressure and competition, this leads not only to the loss of the false self-confidence but also to a loss of motivation among those who were prematurely attracted to these professions (this typically happens during the third year among my electrical and electronics engineering students). On the other hand, students with a high capacity and passion for comprehension are also adversely affected by the outcome-measurement-based evaluation system, which is tailored for isolating interfaces: it does not encourage or reward the derivation of new levels of abstraction from previous ones, but compels learners to take ready-made, level-specific rules for granted. This deprives the students of their developmental autonomy and of their natural motivation for learning: the sheer pleasure of “jumping to a higher level of abstraction” and the associated cognitive pride.

Conclusion

Ergonomics demands that devices match the needs of their users and support, rather than impede, their development. But the present combination of pragmatic educational policies, which treat the human as an automaton, and the market economy, which repudiates the value of human potential, entrusts children to cognitively non-ergonomic devices, which turn them into automata and waste their potential long before they have a chance to become developers of future technologies. Nevertheless, this can be considered nature’s negative feedback, which may in the long run terminate wrong policies.

References

  1. Yagmur Denizhan, Performance-based control of learning agents and self-fulfilling reductionism. Systema 2 no. 2 (2014)
  2. Borovik A. V. Calling a spade a spade: Mathematics in the new pattern of division of labour. arXiv:1407.1954 [math.HO]
  • Open access
  • 25 Reads
The Consistent Principle of Information, Life and Cognition

Introduction

Shannon and Weaver’s classical work implies that without selection there is no information, only possible states of affairs; in other words, information begins with selection (Shannon, C. & Weaver, W., 1964, 9, 31). As there are different messages to choose from, there must be a reason for the consumer of information to select one rather than the others. The reason is that the selected message serves a certain purpose of the consumer; in Bateson’s terms, it makes a difference to the consumer. It is the consumer’s selection that gives significance to a message. Without a consumer there is no information, as there is no significance. Hence, in order to understand the significance of information, it is necessary to understand the self-conservation of the consumer. I call this claim the Consistent Principle. There are two versions of the Consistent Principle: weak and strong.
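This selection-based picture can be made quantitative with Shannon’s own measure: the information in selecting a message of probability p is -log2 p, and the average over all selections is the entropy of the source. The probabilities below are a made-up example source, not taken from the paper:

```python
import math

# Hypothetical message source: four messages with unequal selection probabilities.
probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Surprisal of selecting one message: -log2 p. Without selection there is no
# information, only the space of possible messages.
def surprisal(p):
    return -math.log2(p)

# Average information per selection = Shannon entropy of the source.
entropy = sum(p * surprisal(p) for p in probs.values())

print(surprisal(probs["A"]))  # 1 bit: the likeliest choice is least informative
print(entropy)                # 1.75 bits per selection on average
```

Note that this measure is silent about significance: it quantifies the act of selection, while the question of why the consumer selects at all is exactly what the Consistent Principle addresses.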

The representative theory of the weak version is John Collier’s dynamic theory of information: “no meaning without intention; no intention without function; no function without autonomy” (Collier, J., 1999a; 1999b). The strong version was put forward by Maturana and Varela in their theory of the general biology of life and cognition: “living systems are cognitive systems, and living as a process is a process of cognition” (Maturana, H. & Varela, F., 1987/1992, 13); “all doing is knowing, and all knowing is doing” (Maturana, H. & Varela, F., 1987/1992, 26); “living systems are cognitive systems, and to live is to know” (Maturana, H., 1988). In this paper I will introduce and compare these two versions, concluding that the weak version is acceptable.

Weak Consistent Principle

I slightly change Collier’s slogan to “no meaning without function; no function without intention; no intention without autonomy”, on the grounds that intention is necessary for function rather than the other way around. Collier’s slogan can then be expressed as: autonomy is the necessary condition for intention, which is the necessary condition for function, which is the necessary condition for meaning. In short, information originates from autonomy. First, meaning requires function. Generally speaking, in analytic philosophy the meaning of a sign is defined as what the sign refers to or stands for (Lycan, W. G., 2008, 3). With the brilliant argument given by Frege (1892/1948), the meaning of a sign is discriminated into referent and sense: the referent is what the sign refers to, and the sense is the mode of presentation contained in the sign. There seems to be no place for function in this dual structure of reference or meaning in the analytic tradition. Another insight, from David Lewis (1970, 19), tells us to be careful about the difference between the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world, and the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Function and intention may exist in the latter description but not the former. So why is function necessary for meaning?

The term function in Collier's context is inherited from behaviorism and functionalism, which treat the living system or brain as a "black box" running certain software, like a computer, such that a certain input yields a certain output (Putnam, 1991, 73). As Putnam criticized, this picture is wrong because it ignores intentionality, the very defining feature of living beings and of mind (Putnam, 1991, 74). Functionalism, therefore, is not really about function itself. Hence, in order to recover the original meaning of function, I swap the positions of intention and function in the slogan. In this context, function can be interpreted as the functional apparatus that realizes certain intentional purposes for a living being.

If we take the dualistic position of analytic philosophy, it leads to pan-semiotics: anything can be a sign of other things, since anything stands in some kind of relation to other things. Without function, the distinction between sense and referent becomes a tree without a root or water without a source. In this sense, "a sign that represents without thought might be a representation only in the sense that it can serve as such under the condition that it is so interpreted" (Collier, 1999a). According to Bateson's definition of information as a difference which makes a difference, only a causal process or state of affairs in the natural world that makes a difference for a consumer, affording its intentional purposes, can be information for that consumer. Since signs are intersubjective and can be read by any consumer in the same community, the sense or interpretation of a sign or language can be interpreted as its proper function.

Intention is the necessary condition for function. A function is supposed to realize a certain purpose through an apparatus that manifests it. The question, then, is where these purposes come from: function is derived from intention. Intention here means wanting to achieve certain purposes or goals; it is the living being's or consumer's individual will or desire. This is also true for the functional organs of living systems. Every function requires reference to an intention, and intention is always the intention of somebody or of something living. Unlike all purely physical and self-organizing processes, a living system has the self-agency to maintain its self-identity. The desire of the living to conserve itself produces the most primary intention. Hence, autonomy is necessary for intention.

Strong Consistent Principle

Generally, when investigating cognition, we divide the whole situation into the cognitive agent, which takes cognitive action, and the state of affairs the agent perceives. However, Maturana and Varela found that the sensory organs of living systems do not passively reflect the world like a mirror; instead, they actively select what they perceive. What can be perceived of the world is determined by the perceptual capacities of the sensory organs. Hence, seeing and hearing as perceptual processes are the actual acting or behaving of eyes and ears in their domain.

Then Maturana concludes that “A cognitive system is a system whose organization defines a domain of interaction in which it can act with the relevance to the maintenance of itself, and the process of cognition is the actual (inductive) acting or behaving in this domain. Living systems are cognitive systems, and living as process is a process of cognition.” (Maturana, H., 1970/1980, 13) 

In other words, "all doing is knowing, and all knowing is doing": the living system is the necessary and sufficient condition for information and cognition. The reason past epistemologists could not reach this point is that they forgot an important premise: "everything said is said by an observer". In fact, two levels of cognition are at work here: the cognitive agent perceiving its surrounding environment, and the observer observing the whole situation. Past epistemology confuses these two levels, mistaking the meta-domain for the domain of the cognitive agent. If we give up the observer's perspective and take only the cognitive agent's, what happens in the cognitive process is nothing more than a structurally determined process triggered by stimuli from outside world affairs impinging on the sensory organs. Hence, the biological processes of the living being suffice to explain cognition and information: the living process is the knowing process, and the knowing process is the living process.

Why Weak Version?

Compared with the weak version, the strong version is not as persuasive as it appears; it faces two challenges. The first is that the living system is itself an observer of itself, distinguishing itself from its surrounding environment. This is an inner contradiction in the original works of Maturana and Varela. On the one hand, in defining the living system they say that "the most striking feature of an autopoietic system is that it pulls itself up by its own bootstraps and becomes distinct from its environment through its own dynamics, in such a way that both things are inseparable" (Maturana & Varela, 1987/1992, 46-47): the living system is an observer of itself. On the other hand, they say that the distinction between the living system and the domain it embodies is made by the meta-observer, following the epistemological premise that "everything said is said by an observer". Yet when they come to their conclusion, the fact that the living system is an observer of itself is overlooked, and they infer that the distinction is made by the meta-observer and that what happens for the living system is nothing more than structurally determined processes. Living and cognitive processes are thus reduced to determined biological processes. But if the distinction is not real and we cannot separate living and cognitive processes from other structurally determined processes, then there is no reason to suppose that living and cognitive processes are special processes essentially different from other causal processes. This biological reductionism is therefore unacceptable, as it ignores the inner purpose of self-identity of living beings.

The second challenge the strong version faces is that the signs employed by living beings to perceive the outside world and communicate with others are intersubjective and normative. In dealing with the interaction between the living system and others, Maturana and Varela explain the process as structural coupling: for the living system, any structural changes triggered and selected through its recurrent interactions with others, whether acting upon its surroundings or communicating with other living beings, are in essence structural coupling. Normally, when explaining language, we divide it into the denoting and the denoted, or analyze the sign further into representamen, interpretant and object, as Peirce did. However, this is a classification made by the meta-observer, not what actually happens in the process. In this way, Maturana and Varela reduce information and communication to structural coupling, which is essentially a biological process. It is hard, however, to explain the normativity of symbols in this way. Unlike an actually realized purpose, the proper function or purpose of a sign is what it is supposed or designed to achieve; this does not mean the sign succeeds in realizing that purpose. In other words, it can be mistaken. Biological reductionism has no way to explain why misinformation is possible (Godfrey-Smith, 1989).

Given these two challenges, the strong version of the Consistent Principle is too strong to interpret information. Its problem is that it reduces the intentional and normative levels of information to the causal or biological level. Without intention, the living system becomes an autonomous machine without a soul; without the normative level, the flexibility of the representational capacity of signs is lost and communication becomes impossible. The weak version of the Consistent Principle is thus necessary for understanding information. And because there is a hierarchical structure leading from living to meaning, a transdisciplinary framework is needed to explain information in accordance with the Consistent Principle.

Acknowledgments

I thank Prof. Søren Brier very much for his insightful suggestions. Without him, I could never have come to know such beautiful ideas so clearly.

References

Shannon, C. & Weaver, W., 1964. The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Collier, J., 1996. Information originates in symmetry breaking. Symmetry: Culture & Science 7: 247-256.

Collier, J., 1999a. Autonomy in anticipatory systems: significance for functionality, intentionality and meaning. In Computing Anticipatory Systems, CASYS'98 - Second International Conference, edited by D. M. Dubois, American Institute of Physics, Woodbury, New York, AIP Conference Proceedings 465: 75-81.

Collier, J., 1999b, The Dynamical Basis of Information and the Origins of Semiosis. In Taborsky E., (ed) Semiosis, Evolution, Energy: Towards a Reconceptualization of the Sign. Aachen Shaker Verlag, Bochum Publications in Semiotics New Series. Vol. 3: 111-136.

Godfrey-Smith, P. 1989. Misinformation. Canadian Journal of Philosophy. Vol. 19, No. 4: 533-550.

Kant, I., 1781/1787/1996. Critique of Pure Reason. Translated by Werner S. Pluhar; introduction by Patricia Kitcher. Indianapolis: Hackett Publishing Company, Inc.

Lewis, D., 1970. General Semantics, Synthese 22: 18–67.

Locke, J., 1690/1995. An Essay Concerning Human Understanding. Amherst, NY: Prometheus Books.

Lycan, W. G., 2008. Philosophy of Language: A Contemporary Introduction (2nd Edition). New York: Routledge.

Maturana, H. & Varela, F., 1970/1980. Autopoiesis and Cognition: The Realization of the Living. Dordrecht: D. Reidel (Boston Studies in the Philosophy of Science).

Maturana, H. & Varela, F., 1987/1992. The Tree of Knowledge: The Biological Roots of Human Understanding. Boston: Shambhala Publications, Inc.

Maturana, H., 1988. Reality: the Search for Objectivity or the Quest for a Compelling Argument. The Irish Journal of Psychology, 9(1), 25-82.

Putnam, H., 1991. Representation and Reality. Cambridge, Mass.: MIT Press.

  • Open access
  • 36 Reads
The Metaphysical Ground of Information Processing

Introduction

In discussing the various ethical implications of new technologies that are changing the conditions of communication, the foundations of the concepts one deals with in practice are often forgotten. The point is that neglecting the metaphysical origin of conceptualization opens the gates to errors and misunderstandings. Hence, it is the metaphysical ground itself that has to be illuminated in the first place, before one can proceed to develop ethically adequate praxis. How this can be done is discussed in this contribution by showing that praxis is already theoretical, while theory is always practical.

Results and Discussion

What we find is that, on the route to developing a unified conception of systems, we are simultaneously engaged in looking for evolutionary stages of the interaction between energy-mass (matter) on the one hand and entropy-structure (information) on the other. The generic approach to a unified concept is that of category theory, taking literally the mathematical origin of systems in Poincaré's theory (long before the notion was introduced by Bertalanffy). At the same time, while drafting a framework for the unified treatment of systems, we also find that a detailed differentiation of the types of systems is necessary in order to understand the primary starting points for subsequent applications of what has been conceptualized.

Conclusions

Obviously, there is a multitude of types of objects for categories that model one or another type of system. Essentially, two criteria help to distinguish among them. The first criterion is the form of organization of a system, a concept related to the actual flow of information. We thus have order out of chaos (i.e. information structures emerge in the universe), order out of order (structures form self-replicating structures), and pure information out of order (organisms with minds externalize information, communicating and storing it), respectively.[1]

The second criterion is the degree of complexity inherent in a system: this is what determines the localization of the appropriate level of evolution, which can be identified with the region associated with an explicit type of order production. The interacting parameters that determine form of organization and complexity, respectively, are then energy-mass (matter) and entropy-structure (information).

Note that the substrate of all of this is always the same: we can call it primordial matter, possibly based on the initial dynamics of decoherence as known in quantum physics. But we must also note that, when modelling the world, we permanently talk about the world as we can observe it, and as we would like to model it speculatively as a possibility; we do not encounter the real world after all. We therefore have to keep this "knowledge gap" in mind ("mind the gap"), in order to realize the ontological distance that our methods cannot actually cover. This is why we define systems by what we call them rather than by what they are. And on the route towards a unified concept of systems, it has become obvious that category theory (in the mathematical sense) is not only helpful but mandatory in the first place.
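To make the category-theoretic view of systems slightly more concrete, reference [3] below treats a system coalgebraically: a state space S together with a structure map from S into a functor applied to S. The following Python sketch (all names are illustrative choices of this summary, not taken from the cited works) uses the simplest such functor, F(S) = O × S, so that each state yields one observation and one successor state:

```python
from typing import Callable, List, Tuple, TypeVar

S = TypeVar("S")  # state space
O = TypeVar("O")  # observation space

# A system in the coalgebraic sense is a structure map S -> O x S.
# Observing the system's behaviour means unfolding this map from
# an initial state.
def unfold(step: Callable[[S], Tuple[O, S]], state: S, n: int) -> List[O]:
    """Return the first n observations produced by iterating the structure map."""
    outputs: List[O] = []
    for _ in range(n):
        out, state = step(state)
        outputs.append(out)
    return outputs

# Example: a counter system whose structure map emits the current
# state and moves to its successor.
counter = lambda s: (s, s + 1)
print(unfold(counter, 0, 5))  # [0, 1, 2, 3, 4]
```

On this view, two systems with different internal state spaces but identical unfolded observations count as behaviourally equivalent, which is the coalgebraic notion of bisimilarity developed in reference [3].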

References

  1. Jonathan Jaquette: Category Theory Pertaining to Dynamical Systems. https://upload.wikimedia.org/wikipedia/commons/4/48/Final_Topics_Paper_on_Catos.pdf
  2. Mike Behrisch et al.: Dynamical Systems in Categories, TU Dresden (2013), http://www.qucosa.de/fileadmin/data/qucosa/documents/12990/PreprintMATH-AL-05-2013.pdf
  3. Jan J. M. M. Rutten: Universal coalgebra: a theory of systems. Theoret. Comput. Sci. 249 (1), 3-80, 2000.

[1] We follow here the ideas of Robert Doyle. See: www.informationphilosopher.com/introduction/information/
