
List of accepted submissions

 
 
 
Techno-Politics as Network(ed) Struggles

Introduction

At least since the 2013 “Summer of Surveillance” triggered by the NSA disclosures, internet surveillance and informational privacy and security have received widespread public attention and become a political concern for many. Taking the disclosures as a starting point, I follow up on this development and inquire into the techno-politics of surveillance and counter-surveillance. Instead of focusing on regulation applied to technological practices from outside, I investigate the socio-political dimensions of the internet infrastructure itself and the politics of concrete technological surveillance and counter-surveillance practices. I show how data infrastructures are not only regulated through policy, but can themselves function as techno-political means which bring about a specific socio-technical structure. My question is: How do surveillance and counter-surveillance technologies operate as a form of techno-politics within the internet infrastructure? Answering this question can enhance our understanding of the impact that ubiquitous information technologies, and their steady diffusion into every realm of our lives, have on the political landscape.

Technological infrastructures and networks are of central importance to my research, as contemporary ICTs and ICT surveillance technologies operate in and through networks rather than as single artifacts. The network, one of the 21st century’s most prominent entities, is both a potential threat and a potential point of control. Cumbers, Routledge and Nativel argue that “it is becoming increasingly difficult for ruling elites, usually located at the national scale, to play the gatekeeper role, through traditional territorialized hierarchies, with regard to information and communication flows across space” (Cumbers, Routledge & Nativel, 2008, p. 188). To exercise control then requires an “‘empire’ based upon a decentred and deterritorializing apparatus of rule that progressively incorporates the entire global realm” (Cumbers, Routledge & Nativel, 2008, p. 185). At the same time, networks have the tendency “to create hubs as these provide more stability and robustness. Hubs establish a kind of ‘hierarchy’ within networks and this in turn gives a certain advantage to key positions of players” (Cumbers, Routledge & Nativel, 2008, p. 189). In my research I explore how surveillance technologies exploit the internet’s inherent hierarchies and operate through the global hubs that have emerged within the infrastructure. Counter-surveillance technologies try to sabotage the centralized surveillance network this establishes. By using encryption technologies, they aim to make hubs dysfunctional for surveillance and to strengthen non-hierarchical network features. Consequently, the two antagonists are opposed in the way they use the network and are involved in a struggle over the network’s very structure and technological design.

Methods

I base my framework on the pragmatist John Dewey’s approach to the relation between politics and infrastructures (Dewey, 1927) and extend it by analyzing the actual technological internet infrastructure. Susan Leigh Star and Geoffrey C. Bowker’s work on infrastructures and Alexander Galloway’s description of the different network topologies to be found within the internet provide the basis for this analysis (Star & Bowker, 2006; Galloway, 2004). It lays the foundation for understanding how surveillance and counter-surveillance technologies operate in and on the internet, and for understanding the political dimensions of this operation. In Dewey’s political thought, technological infrastructures play a major role because he held politics to be concerned with governing the channels of human interactions, of which technological infrastructures are an essential part (Dewey, 1927, p. 30). Through these channels, people can purposefully organize within society, interact through networks of communication and collaboration, and engage in joint endeavors. Technologies become the means and ends of their purpose-directed activities and signify “the intelligent techniques by which the energies of nature and man [sic] are directed and used in satisfaction of human needs” (Hickman, 2001, p. 8). Politics exercises indirect control over people’s behavior by governing technological channels and regulating infrastructural systems. It is through these systems that interactions amongst society’s members propagate and actions translate into consequences through transmission over several instances.

Even though Dewey recognized their political importance, he did not analyze infrastructures in detail. According to Susan Leigh Star and Geoffrey C. Bowker, infrastructure is that “upon which something else rides, or works” (Star & Bowker, 2006, p. 230). As the technological structures that enable social phenomena, infrastructures are always underneath: transparent, invisible and embedded. Once in place, they only call for active investigation and attention when conditions of usability are altered and smooth use is prevented; otherwise they remain outside our awareness and active experience. Because they organize flows of exchange within socio-technical complexes, infrastructures can be understood as the technological ordering of things. They consist of a plurality of technologies, agents and sub-networks, and their actual configuration is contingent and dependent on implementation. Every configuration “represents only one of a number of possible distributions of tasks and properties between hardware, software and people” (Star & Bowker, 2006, p. 234). Network diagrams describe the structural features of different configurations and visualize their inherent distribution of power and control. To describe the control structures within the internet infrastructure, Alexander Galloway uses three different network types: the distributed, the decentralized and the centralized network (Galloway, 2004, pp. 11-12 & pp. 30 ff.). The centralized network is a hierarchical network in which one central host wields power over subordinate nodes. The decentralized network is the conjunction of several centralized networks and consists of multiple hosts, each ruling over its own subset of nodes. In both networks, information flows one-directionally from the host(s) to the nodes. A distributed network, on the other hand, has no hierarchical order; every node is an equally autonomous agent and can communicate with any other node peer-to-peer. When it comes to surveillance, the centralized network is the easiest to monitor, since all flows must pass through the central hub. To surveil a decentralized network, multiple hosts need to be intercepted, because information does not accumulate in one place. In a distributed network, surveillance is most complicated: in order to access every information flow within the network, all nodes (network participants) must be monitored.

Figure 1. The centralized, decentralized and distributed network diagram.

(see PDF version for the Figure).
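The contrast between these three diagrams can be made concrete in a few lines of code. The following is a minimal sketch of my own (not drawn from Galloway; all node names are invented) that counts the monitoring points an observer needs in order to see every flow in each topology:

```python
# Toy model: a network is a list of links (node pairs); 'hubs' names the
# privileged hosts, if any. Returns the set of nodes an observer must tap
# so that every flow passes a monitored point.
def taps_needed(links, hubs):
    nodes = {n for link in links for n in link}
    if hubs and all(set(link) & hubs for link in links):
        return hubs    # star-shaped traffic: tapping the hub(s) suffices
    return nodes       # peer-to-peer traffic: every participant must be tapped

centralized   = [("hub", "n1"), ("hub", "n2"), ("hub", "n3")]
decentralized = [("hubA", "n1"), ("hubA", "n2"),
                 ("hubB", "n3"), ("hubB", "n4"), ("hubA", "hubB")]
distributed   = [("n1", "n2"), ("n2", "n3"), ("n3", "n4"),
                 ("n4", "n1"), ("n1", "n3")]

print(len(taps_needed(centralized, {"hub"})))             # 1
print(len(taps_needed(decentralized, {"hubA", "hubB"})))  # 2
print(len(taps_needed(distributed, set())))               # 4 (all nodes)
```

The rising tap count across the three cases is exactly the asymmetry the two antagonists struggle over in the remainder of this paper.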

 

Results and Discussion

Within the internet infrastructure’s different technological layers, we can find both distributed and (de)centralized network topologies. On the one hand, there is what I call the internet’s “physical layer”. This layer transmits the actual data signals and consists of devices, cable networks, routers, servers, etc. Looking at its global constitution, we can see that this physical layer resembles a decentralized network. Across the globe, there are a number of major internet exchange points (IXPs). These are operated by internet providers like AT&T, and most are located in the United States and Europe, for example in London, Frankfurt, Paris and New York (Figure 2). Nearly all internet traffic needs to pass through one of them in order to be forwarded to its destination. Consequently, the IXPs constitute central internet hubs. The global (undersea) cable networks reinforce this, because the cables with the greatest bandwidth connect to these IXPs (TeleGeography, 2014). As it is cheapest to route through high-bandwidth links, data often does not take the geographically shortest path. Instead, it is relayed through different high-bandwidth cables across the globe, and most likely across the United States. It is therefore not surprising that NSA surveillance technologies exploit the decentralized structure of the physical layer (The Guardian, 2013). As most global hubs are located on US soil or on the soil of US allies, the NSA can gain access to global information flows and secretly retrieve data doubles. One example of how this is done is Room 641A in AT&T’s office in San Francisco. According to former technician Mark Klein, the NSA had installed a splitter device in the office’s internet room, which is basically an IXP (Klein, 2007). This splitter directs copies of all passing internet traffic to the NSA’s secret room, where the data is analyzed with the latest technology. From such interception points, the NSA then feeds the data into its own network and data center. This creates a centralized shadow network on top of the actual internet infrastructure, in which the NSA is the central hub. From this position it can monitor information flows and oversee the whole network, while peripheral network participants remain unaware. Moreover, it is potentially able to manipulate data flows, as has been the case with the program Quantumtheory (Spiegel Online, 2013).

Figure 2. Global internet routes in 2012: © Copyright 2014 PriMetrica, Inc. Retrieved from http://www.telegeography.com/telecom-resources/map-gallery/global-internet-map-2012/index.html.

(see PDF version for the Figure).
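The bandwidth-driven routing described above can be illustrated with a short, hedged sketch: modelling link cost as 1/bandwidth, a cheapest-path search prefers a detour through a high-capacity IXP over a thin direct link. All cities, links and bandwidth figures below are invented for illustration.

```python
import heapq

def cheapest_path(graph, src, dst):
    """Dijkstra search where each link costs 1/bandwidth."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, gbps in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + 1.0 / gbps, nxt, path + [nxt]))
    return float("inf"), []

# link capacities in Gbit/s (illustrative numbers only)
links = {
    "SaoPaulo":  {"Lisbon": 2, "NewYork": 400},
    "Lisbon":    {"SaoPaulo": 2, "Frankfurt": 100},
    "NewYork":   {"SaoPaulo": 400, "Frankfurt": 800},
    "Frankfurt": {"NewYork": 800, "Lisbon": 100},
}

print(cheapest_path(links, "SaoPaulo", "Frankfurt"))
# -> (0.00375, ['SaoPaulo', 'NewYork', 'Frankfurt']): the geographically
#    longer route via the US hub beats the thin direct Atlantic link.
```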

 

But there is also a reason why we often consider the internet a distributed network. Operating ‘on top’ of the physical layer, the “protocological layer” creates a network of equal nodes and bi-directional communication flows. This layer defines the rules according to which data is wrapped and transmitted by the physical layer. The internet’s TCP/IP protocol suite logically assigns equal weight to all hubs and nodes (Cowley, 2012; Galloway, 2004). According to its predefined rules, IXPs have to route data but are not allowed to wield power over information flows. The protocols’ universal rules apply equally to all network participants communicating through the infrastructure. To a potential surveillant, this distributed network is a thorn in the side, because surveilling all flows here is very complex. For this reason, counter-surveillance technologies operate on and strengthen the protocological layer. By encrypting data flows end-to-end, they make the decentralized physical structure dysfunctional for surveillance. Data still flows through the physical infrastructure and passes global hubs, but through encryption, communication is established peer-to-peer only. Anyone who intercepts the hubs cannot obtain any information usable for surveillance, because they cannot read the data. The Tor network does something similar (Tor Project, 2014); it hooks into the regular internet infrastructure and allows users to access the internet, but because it also encrypts metadata, surveillance of internet activities becomes impossible. In this way, encryption technologies have the power to strengthen the distributed features of the protocological layer and circumvent the decentralized physical one.
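A minimal sketch can make the hub’s predicament concrete. Assuming the third-party Python package `cryptography` (and, for simplicity, a symmetric key shared out-of-band, where real end-to-end systems would negotiate keys with public-key cryptography), an observer sitting at an intercepted hub relays only opaque ciphertext:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # known only to the two communicating peers
sender = Fernet(key)

ciphertext = sender.encrypt(b"meet at the usual place")

# --- at the hub/splitter: the observer sees only this opaque token ---
print(ciphertext[:40])

# --- at the receiving peer, who holds the key ---
print(Fernet(key).decrypt(ciphertext))   # b'meet at the usual place'
```

The hub still forwards the bytes, exactly as described above; what encryption removes is the hub’s ability to turn those bytes into surveillance-grade information.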

Conclusions

The results of my analysis show how the operation of surveillance and counter-surveillance technologies exploits different socio-political dimensions inherent to the internet infrastructure. Network diagrams helped me to describe these different dimensions and to demonstrate how the two antagonists are engaged in a struggle over the network’s (dominant) structure and particular socio-technical organization. NSA surveillance technologies aim at establishing a centralized network in which the agency provides the central hub and oversees all information flows. Counter-surveillance technologies aim at establishing a distributed network in which all nodes have equal rights and no single host has centralized control. This techno-political struggle is carried out within the infrastructure itself and through technological means. Within a Deweyan account of politics, surveillance and counter-surveillance technologies thus operate as a form of techno-politics, because they organize the channels of human interaction and strive to systematically regulate structures of interaction and communication through technologies.

However, Dewey still thought infrastructures to be extrinsic to political forms. In the case of governmental internet surveillance, we now see that they become intrinsic, as infrastructures are employed for political purposes. In such techno-politics, political solutions are not negotiated through public discourse but through the application and operation of technologies. The people implicated in the global network are affected by these techno-politics, because these techno-politics structure their interactions in the network. But when political struggles are carried out on infrastructural levels that are transparent to users by their very definition, people remain unaware of these ongoing political developments. The problem this poses to democracy is further intensified by the network’s deterritorializing forces, which allow national agencies to access global hubs and wield power over a global public, while representing only a single nation state in whose interest they (supposedly) act. If technological solutions are provided to political problems, and if these solutions are applied on infrastructural levels that are transparent and invisible, then regular internet users and citizens are left unaware of political processes and cannot participate. Instead, it is technological elites who negotiate political decisions.

Acknowledgments

This paper is the result of my Master’s graduation project in Philosophy of Science, Technology and Society, offered at the University of Twente in the Netherlands. At this point I would like to offer my special thanks to my first supervisor, Dr. Michael Nagenborg, who was very enthusiastic about my project from the beginning and provided me with the right starting points and a great introduction to Surveillance Studies. I would also like to express my great appreciation to my second supervisor, Prof. Peter-Paul Verbeek, who gave me feedback during the writing process and great support throughout the whole program. Finally, I wish to acknowledge all the people who make this outstanding Master’s program possible, my fellow students with whom I had such great discussions, and my family and friends for always supporting me.

References and Notes

Cumbers, A.; Routledge, P.; Nativel, C. The entangled geographies of global justice networks. Progress in Human Geography 2008, 32(2), 183-201.

Cowley, C. Communications and Networking, 2nd ed.; Springer-Verlag: London, United Kingdom, 2012.

Dewey, J. The Public and its Problems; Swallow Press/Ohio University Press: Athens, OH, United States, 1927.

Galloway, A. Protocol: How Control Exists after Decentralization; The MIT Press: Cambridge, MA, United States, 2004.

Hickman, L. Philosophical Tools for a Technological Culture; Indiana University Press: Bloomington, IN, United States, 2001.

Klein, M. Spying on the Home Front. Interview by H. Smith, PBS Frontline, 2007, May 15. Retrieved from http://www.pbs.org/wgbh/pages/frontline/homefront/interviews/klein.html

Spiegel Online. NSA-Dokumente: So übernimmt der Geheimdienst fremde Rechner; Published 2013, December 12. Retrieved from http://www.spiegel.de/fotostrecke/nsa-dokumente-so-uebernimmt-der-geheimdienst-fremde-rechner-fotostrecke-105329-8.html

Star, S. L.; Bowker, G. C. How to infrastructure. Handbook of New Media 2006, 230-245.

TeleGeography. Submarine Cable Map 2014; 2014. Retrieved from http://www.telegeography.com/telecom-resources/map-gallery/submarine-cable-map-2014/index.html

The Guardian. NSA Prism program slides. Published 2013, November 1. Retrieved from http://www.theguardian.com/world/interactive/2013/nov/01/prism-slides-nsa-document

Tor Project. Tor: Overview. Project website, 2014. Retrieved from https://www.torproject.org/about/overview.html.en

Basic Law of Information: The Fundamental Theory of Generalized Bilingual Processing

Introduction

This article aims to provide an accessible introduction to the basic law of information, the fundamental theory of generalized bilingual processing.

Bilingualism can be divided into three categories: narrow bilingualism, such as Chinese and English; alternative bilingualism, such as terms and sayings; and generalized bilingualism, such as mathematical language (arithmetic figures, for example) and natural language (Chinese characters, for example). All of them belong to generalized text in the broad sense.1-2

The basic law of information comprises: A, the existence of real basic information, taken as an axiom; B, the law of human-computer interaction; and C, the law of interpersonal communication.

The core problem, and the focus of this article, is how to resolve ambiguity in translation and machine translation.3-5

Methods

There are two types of formal strategy for generalized bilingual information processing:

First, the inherited software-engineering strategy, comprising natural language understanding, knowledge representation, and pattern recognition;6-8

Second, the newly created systems-engineering strategy, comprising generalized bilingualism, knowledge ontology, and bilingual programming.

The following highlights three operable basic steps and their three supporting models, together with their theoretical basis, involving two types of instances that span the macro and micro levels.

Step 1 and Model 1:

The butterfly model, refined by the author, is developed on the basis of the research results of Weaver and Vauquois:9-10

These predecessors envisaged an intermediate language in statistical and rule-based machine translation, but no such language actually exists. It is more appropriate to treat one pair out of a series of bilingual pairs as the “intermediate language”; the key is thus the construction of bilingual pairs.
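To make this operational reading concrete, here is a speculative sketch (my own illustration, not the author’s system): translation becomes lookup over curated bilingual pair tables, and a third language is reached by chaining two tables, so that one pair table plays the role of the “intermediate language”.

```python
# Hypothetical pair tables; a real system would hold many thousands of pairs.
zh_en = {"你好": "hello", "谢谢": "thank you"}
en_fr = {"hello": "bonjour", "thank you": "merci"}

def translate(term, *pair_tables):
    """Chain bilingual pair tables; the middle table acts as the pivot."""
    for table in pair_tables:
        if term not in table:
            return None   # no curated pair -> no (ambiguity-free) translation
        term = table[term]
    return term

print(translate("你好", zh_en, en_fr))   # -> 'bonjour', pivoting through English
```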

Step 2 and Model 2:

The knowledge and common sense ontology model refined by the author:

Through the combination of seven characters and a tetrahedron, it depicts a blueprint for the top-level design of the whole of human knowledge: the most basic conceptual framework and the most concise method system. In this way, it builds a bridge of qualitative analysis between interdisciplinary, cross-field and cross-industry knowledge subdivision systems.

Step 3 and Model 3:

The three types of bilingual information processing system (synergy model) constructed by the author:

It goes beyond Saussure’s image of the language system as chess and Wittgenstein’s figure of language as a game, and can thus be called super chess (super cloud) and a large-span language game (specific cloud).11-12

In this analogy, the rules of chess are the real basic information that exercises control: the chess manual, the chess idea, and the chessboard with its pieces correspond to language, meaning, and physical images respectively, that is, to “language, knowledge, software”, known as the three phenomenal types of information. The author’s model, taking chess as an analogy, arrives by different methods at the same result as Wittgenstein’s language, thought, world; Husserl and Heidegger’s inter-subjectivity, subjectivity, former subjectivity; Popper’s three worlds; and traditional philosophy’s methodology, epistemology, ontology.13-15

Results and Discussion

The combination of standardization and individuality, of pluralism and diversity, achieves the best human-computer interaction results.

Basic law of information A: the sequence-position relationship, the only conservation;

Basic law of information B: equivalence (according to the same sequence-position) and parallelism; correspondence and conversion.

Basic law of information C: synonymy (mutual agreement) and parallelism; correspondence and conversion.

Model 1 (explain first, then translate) and Model 2 (understand terms, become familiar with sayings) follow basic law of information C, contributing to the issues of upgrading language ability and deep-processing knowledge.

Model 3 (super cloud, specific cloud) follows basic laws of information B and A, contributing to the issue of machine translation quality.

The advantage of the generalized bilingual information processing method lies in achieving a reasonable division of labour, complementary strengths, close collaboration and optimized interaction among the three types of bilingualism.

Figure 1. Model 1 (explain first, then translate): the key is the construction of bilingual pairs.

(see PDF version for the Figure).

 

Figure 2. Model 2, the knowledge and common sense ontology: the most basic conceptual framework.

(see PDF version for the Figure).

 

Figure 3. Model 2 (understand terms, become familiar with sayings).

(see PDF version for the Figure).

 

Table 1. Model 3 (super cloud, specific cloud) follows basic laws of information A and B.

(see PDF version for the Table).

 

Conclusions

Its significance is that Turing’s “computability” theme and Searle’s “Chinese room” theme can be considered two special cases of Xiaohui’s “bilingual chessboard” theme, thus highlighting the basic law of information and its practical value.16-19

Its significance can be further described as follows:

Theoretically, it broadens the mind:

It is compatible with the convergence of formal information theory and the openness of semantic information theory.20-21

The former is characterized by formality and computability; the latter by diversity and complexity.

Practically, it plays a role:

The generalized bilingual information processing method can go beyond and lead the viewpoints of the two factions, strong AI and weak AI, solving the problems of natural language understanding and of high-quality, high-precision machine translation.

The three basic laws of information serve as the basis for the collaborative translation of the three types of bilingualism; conversely, the realization of generalized bilingual information processing attests to the existence of this collaborative translation mechanism, since the two stand in a mutual causal relationship.

Acknowledgments

Many thanks to UC Berkeley professor John Searle and China University of Geosciences (Beijing) professor Zhifang LIU for helping us to conduct our research at the Sino-US Searle Research Center.

Many thanks to East China Normal University professor Wenguo PAN and World Book Inc. editor Jian LIU for their generous help in perfecting the manuscript.

References and Notes

  1. Zou, X.; Zou, S. A New Mission for Contemporary Chinese Universities: Cultural Inheritance and Innovation Based on Chinese Thinking and Bilingual Processing. Journal of Nanjing University of Science and Technology (Social Science) 2012, 25(5).
  2. Zou, X.; Zou, S. Two Major Categories of Formal Strategy. Computer Applications and Software 2013, (9).
  3. Kruse, P.; Stadler, M. Ambiguity in Mind and Nature: Multistable Cognitive Phenomena; Springer Series in Synergetics; Springer-Verlag: Berlin/Heidelberg, Germany, 1995.
  4. Zou, X.; Zou, S. A Brand-New Machine Translation Strategy. Sciencepaper Online 2011, (7).
  5. Weaver, W. Translation (1949). Reproduced in: Locke, W.N.; Booth, D.A., Eds. Machine Translation of Languages; MIT Press: Cambridge, MA, 1955; pp. 15-23.
  6. Schank, R.C. Conceptual dependency: A theory of natural language understanding. Cognitive Psychology 1972, 3(4), 552-631.
  7. Feigenbaum, E.; McCorduck, P. The Fifth Generation, 1st ed.; Addison-Wesley: Reading, MA, 1983.
  8. Gruber, T.R. A translation approach to portable ontology specifications. Knowledge Acquisition 1993, 5(2), 199-220.
  9. Weaver, W. Translation; memorandum, 1949. Retrieved from http://www.mt-archive.info/Weaver-1949.pdf
  10. Vauquois, B. A survey of formal grammars and algorithms for recognition and transformation in mechanical translation. IFIP Congress (2) 1968, 1114-1122.
  11. Saussure, F. de. Cours de linguistique générale (1916); trans. Baskin, W., Course in General Linguistics; Fontana/Collins: Glasgow, 1977.
  12. Wittgenstein, L. Philosophical Investigations; trans. Anscombe, G.E.M., 1953.
  13. Wittgenstein, L. Tractatus Logico-Philosophicus; trans. Ogden, C.K., 1922.
  14. Scheff, T. A New Paradigm for Social Science; Paradigm Publishers, 2006.
  15. Popper, K. Three Worlds. The Tanner Lectures on Human Values. Retrieved from http://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  16. Turing, A.M. Computability and λ-Definability. Journal of Symbolic Logic 1937, 2(4), 153-163.
  17. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, 59(236), 433-460.
  18. Searle, J.R. Minds, Brains and Programs. Behavioral and Brain Sciences 1980, 3(3), 417-424.
  19. Searle, J.R. The Future of Philosophy. Last corrected: Oct. 1999.
  20. Shannon, C.E. A Mathematical Theory of Communication. Bell System Technical Journal 1948, 27, 379-423, 623-656.
  21. Floridi, L. The Philosophy of Information; Oxford University Press: Oxford, UK, 2011.
Primary and Secondary Experience as a Foundation of Adaptive Information Systems

Introduction

Humanity is witnessing major social, technological, economic and cultural transformations that are producing a new kind of society: the network society [1]. Such an environment is described as turbulent: it is more complex, with higher uncertainty and more interdependence. In such a contextually turbulent environment, technocratic bureaucracies, with the mechanical, authoritarian control structure of their organisational form, cannot absorb or reduce the environmental turbulence. Yet absorption and reduction are necessary to open the way to a viable human future [2].

Information systems can be understood as the “extension of meaning engagement practice through mediating and organising social interactions” [3]. Empirical evidence for such a proposition can be found in a recent massive-scale experiment on Facebook users, in which users’ emotional states changed according to the amount of positive or negative content placed in their news feeds without their knowledge [4]. Besides emotions, patterns of information system use can configure the cognition and behaviour of a user in the process of accomplishing work-related tasks [5].

If an information system consists of social, technological and informational components that are not separate but interrelated [6], and the social component of such a system changes according to patterns of behaviour, while there is an inherent inseparability between the technical and the social [7], can we search for causality between those patterns and the adaptiveness of the information technology?

Human and material agencies are the shared building blocks of routines and technologies, but in isolation neither of them is decisive. What is essential is the moment when they become imbricated, i.e. interlocked in a particular sequence, so that as a whole they produce, sustain, or change routines and technologies [8]. To observe this phenomenon and answer the question above, the particular sequence of the relationship between human and material agencies, the inherent inseparability between the technical and the social, and the complexity of real situations should be examined, rather than separate aspects analysed in isolation [9].

Science 2.0 as a socio-technical system

In a recent proposal for development of science based on socio-technical progress, the term Science 2.0 emerged as a new phenomenon of interrelated socio-technical interactions, claiming that socio-technical systems are best studied at scale, in the real world, by rigorous observation, carefully chosen interventions and ambitious data collections [10]. In such an environment, which is fruitful for critiquing, suggesting, sharing ideas and data, communication is the heart of science, the most powerful tool ever invented for correcting errors, building on colleagues’ work and fashioning new knowledge [11].

To understand technology in society, we have to treat it as an action system, where its subfunctions could be performed by humans or technical objects (human or material agencies) acting as subsystems. This allows us to transform the abstract action system into a socio-technical system by conceiving an object for every suitable acting function and by integrating them into the human acting or working relations [9]. Software to be run on such a socio-technical system must be able to sense, interpret and respond [12] to patterns of system behaviour that emerge according to internal system properties or reflections to the environment.

Secondary experience research

The aim of this research is to investigate and eventually enable the exchange and (re)use of scientific papers created at the universities of the Danube region with their wider external environment, including public, private and non-governmental organisations. Scientists working at the universities publish scientific papers and receive feedback on them according to their usage by this outside environment. The main research question is: can we build an artefact, in the form of an information system, that supports such exchange and reflection?

In a critical review of the literature on university governance of knowledge transfer, the institutionalisation of linkages between universities and industry is identified as a new phenomenon underlying various forms of knowledge transfer activity, ranging from collaborative research projects involving universities and companies (e.g. research contracts), intellectual property rights and spin-offs, labour and student mobility, and consultancy, to “soft” forms of knowledge transfer such as attendance at conferences and the creation of electronic networks. Universities’ governance of knowledge transfer applies only to research contracts, intellectual property rights and spin-offs, yet most university knowledge is transferred via traditional channels such as personnel exchanges, publishing, consulting and conferences. These types of knowledge transfer activities have not been institutionalised, however, and little attention has been paid to their management and governance [13]. We believe that an intervention in this area, by designing an adaptive information system, could extend the capabilities of human and material agencies.

As a starting point for the conceptualisation of our research, we use the distinction between primary and secondary experience proposed by John Dewey. Primary experience is that which involves a “minimum of incidental reflection”, while secondary experience is “what is experienced in consequence of continued and regulated reflective inquiry... experienced only because of the intervention of systematic thinking”. Dewey contrasted these two kinds of experience, proposing that objects in secondary experience “get the meaning contained in a whole system of related objects; they are rendered continuous with the rest of nature and take on the import of the things they are now seen to be continuous with” [14].

In our view, the primary object in designing an information system is the one in which the observed object is excluded from the context with other objects, while secondary objects are those objects which are observed as a part of the higher-level system, consisting of the object itself and its relationships and behaviour in interaction with other related objects. Such a higher-level system includes an information system itself, but also its users and their information behaviour observed as a whole.

To design such an information system, we have to understand information behaviour in socio-technical systems consisting of technologies that support the interaction between scientists, the organisations they work for, and published papers. The environment consists of public, private and non-governmental organisations. These three sectors, together with the academic actors, form a Quad Model [15] or Quadruple Helix [16], providing a framework for the EU Digital Agenda for Europe in which government, industry, academia and civil participants work together [17].

To do so, we have to extend our research beyond the design of the information system towards research on information behaviour in such a socio-technical system.

We have to research which types of information resources (e.g. abstract, full paper) and which types of media (e.g. scientific journal, conference proceedings, web pages) are being utilised, and also what the patterns of information-seeking behaviour are in the process of accessing information resources. These three research variables (type of information resource, type of communication channel, and information-seeking pattern) will provide insight into the impact and usage of already published scientific papers by their environment (public, private and NGO); a minimal sketch of how they could be recorded follows below. Such insight is essential for the design of the artefact, i.e. the information system.
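As a first approximation, the three variables could be captured as a single usage event that the envisioned system logs for later pattern analysis; the record below is a minimal sketch with hypothetical field names, not a finished schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageEvent:
    resource_type: str    # e.g. "abstract", "full_paper"
    channel: str          # e.g. "journal", "proceedings", "web_page"
    seeking_pattern: str  # e.g. "browsing", "citation_chaining"
    sector: str           # "public", "private" or "ngo"
    timestamp: datetime

event = UsageEvent("abstract", "web_page", "browsing", "ngo", datetime.now())
print(event)
```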

Another line of inquiry is the area of interaction or, more precisely, the motivational drivers and factors that influence the interaction between scientists and their environment. If we understand these drivers and factors, we can implement them in the design of an information system.

But we cannot know the effect of such functions in the information system, unless we incorporate them and put them into use.

Theoretical background of research

The main theoretical background of this research is activity theory [18], which describes the three-way relationship between a person (the subject), an object to which an activity is directed, and the tools or instruments used in the activity. A further theoretical extension lies in the different models of information behaviour [19], which provide us with a framework for collecting data about the usage of already published scientific papers in the area of interaction between universities and public, private and NGO organisations.

In our research we will use the Documents for Action concept [20], which gives us an analytical framework for analysing usage, interaction and co-operation around already published scientific papers. Another theoretical concept used in this research is evolutionary learning [21], which suggests that sustainability requires collaboration among governments, businesses and civil society. A clear distinction is made between growth, development and evolution: growth is an increase in size or quantity, development is an amelioration of conditions or quality, and evolution is a tendency towards greater structural complexity and organisational simplicity, more efficient modes of operation and greater dynamic harmony.

Further theoretical contributions to this research are based on knowledge-sharing communities [22] and communities of action [23], which provide detailed frameworks for information system functionality that supports both the social and the technical aspects of information behaviour. Different sociocybernetic concepts concerning self-organisation, self-reference, self-steering, autocatalysis and cross-catalysis, and autopoiesis [24] will also be used in researching feedback between the information system and its users. The proposed research also contributes to the discipline of information systems design science [25-27] and to the extended definition of the information system, seen as an artefact consisting of information, social and technical elements that create a whole greater than the sum of its parts [6].

Conclusion

We observe a trend of blurring the line between the technological and the social in information system research, moving the focus from deterministic to more causal logic in their design. The main aim of our research is to search for feedback from users’ socio-cognitive behaviour that could be used as a signal triggering information system adaptation. One of the theoretical fields we are currently exploring is information behaviour, which results in patterns of that behaviour. As the dynamics of patterns are observable by a machine, we believe it is possible to use this signal to automatically (or semi-automatically) trigger restructuring of the information system, to generate new functions that support existing information system goals and create new ones; a minimal sketch of such a trigger follows below.
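As a hedged sketch of such a trigger (the threshold, field names and adaptation action are all invented for illustration), the system could watch the distribution of seeking patterns in its event log and fire a restructuring step once one pattern dominates:

```python
from collections import Counter

def adapt(event_log, threshold=0.5):
    """Propose a restructuring step when one pattern dominates recent usage."""
    counts = Counter(e["seeking_pattern"] for e in event_log)
    pattern, hits = counts.most_common(1)[0]
    if hits / len(event_log) >= threshold:
        return f"amplify support for '{pattern}'"   # e.g. generate a new function
    return "no adaptation"

log = [{"seeking_pattern": p} for p in
       ["browsing", "browsing", "browsing", "citation_chaining"]]
print(adapt(log))   # -> "amplify support for 'browsing'"
```

Whether such a trigger should fire automatically or only propose changes to a human operator is precisely the semi-automatic design question raised above.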

We perceive an information system as a system consisting of informational, social and technological components acting as a whole, which is aligned with findings from the related theoretical and empirical studies presented in this paper. These components interact with each other, and such interaction, which is not only deterministic but causal, could provide the foundations for adaptive information systems that evolve with their usage. Researching the interaction around scientific papers among universities and public, private and non-governmental organisations could provide us with valuable information on where and how to intervene in such a system.

Building an artefact in the form of an information system for the purpose of the research could provide us with empirical insight into which of the interventions and interactions give optimal results in terms of information system performance.

For example, if we knew which types of documents have the most impact on the environment and trigger the cognitive, communicative and co-operative processes [28] (with public, private and non-governmental organisations), we could further design amplification towards that area of the system, which could then produce a change in the dynamics of information behaviour and the related patterns. New patterns will open up new areas of research interest, which again could be amplified or attenuated. In that way we could design feedback loops in the information system that enable deterministic but also causal properties.

References and Notes

  1. Castells, M. The rise of the network society: The information age: Economy, society, and culture, 2nd ed.; Volume 1; John Wiley & Sons: Hoboken (NJ), USA, 2011; p. 17.
  2. Trist, E.L.; The evolution of socio-technical systems; Quality of Working Life Centre: Toronto, Canada; 1981; p. 39.
  3. Aakhus, M.; Agerfalk, P.J.; Lyytinen, K.; Te'eni, D. Symbolic Action Research in Information Systems: Introduction to the Special Issue. MIS Quarterly 2014, 38(4), 1187-1200.
  4. Kramer, A.D.; Guillory, J.E.; Hancock, J.T. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America 2014, 111(24), 8788–8790.
  5. Ortiz de Guinea, A.; Webster, J. An Investigation of Information Systems Use Patterns: Technological Events as Triggers, the Effect of Time, and Consequences for Performance. MIS Quarterly 2013, 37(4), 1165-1188.
  6. Lee, A.S.; Thomas, M.A.; Baskerville, R.L. Going back to basics in design: From the IT artifact to the IS artifact. Proceedings of the Nineteenth Americas Conference on Information Systems 2013, 1757-1763.
  7. Orlikowski, W.J.; Scott, S.V. Sociomateriality: challenging the separation of technology, work and organization. The academy of management annals 2008, 2(1), 433-474.
  8. Leonardi, P.M. When flexible routines meet flexible technologies: Affordance, constraint, and the imbrication of human and material agencies. MIS Quarterly 2011, 35(1), 147-167.
  9. Ropohl, G. Philosophy of Socio-Technical Systems. Techné: Research in Philosophy and Technology 1999, 4(3), 186-194.
  10. Shneiderman, B. Computer science. Science 2.0. Science 2008, 319(5868), 1349-1350.
  11. Waldrop, M.M. Science 2.0. Scientific American 2008, 298(5), 68-73.
  12. Stock, G. Metaman: The Merging of Humans and Machines into a Global Superorganism; Simon and Schuster: New York, 1993. Cited in: Hofkirchner, W. A critical social systems view of the internet. Philosophy of the Social Sciences 2007, 37(4), 471-500.
  13. Geuna, A.; Muscio, A. The governance of university knowledge transfer: A critical review of the literature. Minerva 2009, 47(1), 93-114.
  14. Dewey, J. Experience and Nature, George Allen & Unwin, Ltd: London, UK, 1929.
  15. Wilson III, E. J. How to Make a Region Innovative. strategy+business 2012, 66, available at: http://www.strategy-business.com/article/12103?pg=all
  16. Carayannis, E.G; Campbell, D.F.J. “Mode 3” and “Quadruple Helix”: toward a 21st century fractal innovation ecosystem. International Journal of Technology Management 2009, 46(3/4), 201-234.
  17. EU. Open Innovation 2.0, available at: http://ec.europa.eu/digital-agenda/en/open-innovation-20
  18. Wilson, T.D. (Ed.). Theory in information behaviour research, Chapter 1-Activity theory, Eiconics Ltd: Sheffield, UK, 2013.
  19. Wilson, T. D. Models in information behaviour research. Journal of documentation 1999, 55(3), 249-270.
  20. Zacklad, M. Documents for action (Dofa): infrastructures for Distributed collective Practices. Actes du workshop “Distributed Collective Practice: Building new Directions for Infrastructural Studies CSCW” 2004, p. 6.
  21. Laszlo, K.C.; Laszlo, A. Fostering a sustainable learning society through knowledge-based development. Systems Research and Behavioral Science 2007, 24(5), 493-503.
  22. Nousala, S.; Miles, A.; Kilpatrick, B.; Hall, W.P. Building knowledge sharing communities using team expertise access maps (TEAM). Proceedings of the KMAP05 Knowledge Management in Asia Pacific 2005, p. 30.
  23. Zacklad, M. Communities of action: a cognitive and social approach to the design of CSCW systems. Proceedings of the 2003 international ACM SIGGROUP conference on supporting group work 2003, 190-197.
  24. Geyer, F. The challenge of Sociocybernetics. "Challenges to Sociological Knowledge", Session 04: "Challenges from Other Disciplines", 13th World Congress of Sociology, 1994, available at: http://www.critcrim.org/redfeather/chaos/006challenges.html
  25. Hevner, A.R.; March, S.T.; Park, J.; Ram, S. Design science in information systems research. MIS Quarterly 2004, 28(1), 75-105.
  26. March, S.T.; Smith, G.F. Design and natural science research on information technology. Decision support systems 1995, 15(4), 251-266.
  27. Järvinen, P. Action research is similar to design science. Quality & Quantity 2007, 41(1), 37-54.
  28. Hofkirchner, W. How to Achieve a Unified Theory of Information. TripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 2009, 7(2), 357-368.
Librarians and University Presses of the World Unite!: Efforts to Resuscitate the Knowledge Commons of Academic Publishing

Introduction

As is relatively well known, Henry Oldenburg, the first Joint Secretary of the then newly founded Royal Society of London, published in 1665 the world’s first scholarly journal, Philosophical Transactions. Oldenburg established the journal to fulfill four functions that continue today as part of the scholarly communication process: registration, which ensures the article is connected to the author as well as the intellectual property right holder; certification of the quality of the research through peer review; dissemination of the research; and archiving to ensure historical preservation and future availability of research. The scholarly publishing system remained confined largely to learned societies for roughly the following three centuries, until commercial publishers began to recognize and exploit the profit potential of academic literature. Sustained annual growth in the number of journals and articles, accompanied by aggressive merger and acquisition activity among the major publishing conglomerates, has resulted in a contemporary multi-billion dollar industry dominated by a handful of publishing behemoths that extract immense resources from institutions of higher education based on the free labour of academics.

And while the largest of the commercial publishers that now dominate the academic publishing ecosystem declare that they are vital to ensuring the effective discharge of the key aspects of the scholarly communication system, the proposed paper will suggest that their command and extreme rent-seeking behavior is parasitic on scholarly communication and ultimately stymies the system. Moreover, and this is the more novel element of the argument to be developed, this stranglehold is superfluous and could be loosened in a way that would restore substantial control over the academic journal publishing system to stakeholders and institutions more closely aligned with the interests of the actual producers and users of scholarly works. Considered in broad brush strokes, the paper will argue that contemporary information and communication technologies, in tandem with open-source digital publishing management platforms, have made it technologically, logistically, and financially feasible for scholarly journal publishing to be reclaimed by members of the academy through their non-profit university presses and libraries, in ways that would respond to the serials crisis experienced by academic libraries while ensuring fidelity to the traditional processes of the scholarly communication system.

A General Overview of the Western Academic Publishing Industry

According to the most recent data collected by the consulting firm Outsell, revenues in 2011 for the science-publishing industry amounted to US$9.4 billion. Based on 1.8 million English-language articles published annually by 3,500 publishers in 27,000 journals, this figure translates into gross revenue of slightly more than US$5,200 per article [as cited in 1].i The academic publishing industry is now dominated by ten major corporations. The top three publishers of scientific journals (Elsevier, Springer, and Wiley-Blackwell) account for approximately 42 percent of all articles published. In part, this degree of concentrated control has been made possible because these large commercial publishers have been very successful in acquiring many of the most prestigious and high-circulation journals across almost all academic disciplines.
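For reference, the per-article figure follows directly from the two totals just cited:

$$\frac{\mathrm{US}\$\,9.4\times10^{9}}{1.8\times10^{6}\ \text{articles}} \approx \mathrm{US}\$\,5{,}222\ \text{per article}.$$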

In an attempt to justify the high rents they extract when selling access to the knowledge created by academic labourers, publishers typically invoke claims about adding value to the broader knowledge ecology. Such assertions completely sidestep the reality that unpaid academic labour provides the content, peer review, and editorial work (although a few publishers pay editors a small stipend, it is typically well below the true value of the person’s efforts) being appropriated by journal publishers. These types of claims also occlude the additional time and money burdens typically offloaded onto authors should their manuscript contain colour material, or require copyright release for images and other copyrighted material they might want to incorporate into their work. And even this value-added work is appropriated by publishers, who coerce authors into surrendering their intellectual property rights as a precondition for publication.

Open Access Responses

In response to several of the trends in the academic publishing industry that have clearly disadvantaged both authors and libraries – that is, the producers and the purchasers of scholarly output – a sustained movement has emerged over the last decade and a half that advocates for and develops models of open access to academic research. The two dominant accepted models for delivering open access to scholarly works are known as ‘Gold’ and ‘Green’ open access. Gold open access refers to peer-reviewed publication in an open-access journal, whereas Green open access involves deposit of the work in an institutional or subject electronic repository. Beyond the mounting success of the Green model that relies on repositories, recent research provides additional evidence that open-access journal publishing has matured into a sustainable form of scholarly publication [2]. As might also be expected given such monumental growth, open-access infrastructure and technical applications have advanced considerably. In particular, Open Journal Systems, a journal management and publishing system developed by the Public Knowledge Project, has become a widely used software platform, deployed by over 5,800 open-access journals.

Unfortunately, the corporate publishing oligarchs have also recognized the capital accumulation opportunities offered by open-access models. All of the leading corporate academic publishers now offer Green and Gold open-access options at a range of different, and exorbitant, price points (e.g., depending on the publisher and the journal, Green open-access options can be purchased for US$1,500, and Gold open access tends to run between US$2,000 and US$3,000). The fact that there is such a range of article processing charges indicates that they are less a reflection of actual production costs and more a calculus of what the market will bear. Clearly, open access per se will not necessarily improve the long-term financial sustainability of the scholarly communication system. Instead, these corporate adaptations to open access represent a direct response by commercial publishers to subvert the open-access model in service of their own accumulation imperatives. Indeed, according to analysts at Bernstein Research, which upgraded its stock outlook for Reed Elsevier to market-perform in September 2014, from underperform in 2011, open access has done little to challenge the market strength of the leading subscription publishers. Instead, these analysts suggest that open-access funding models may actually be contributing to the profits of scientific, technical, and medical (STM) journal publishers. This assessment has certainly been borne out in the case of Elsevier, which has been steadily increasing the operating profit margins of its STM journals over the last few years: 36 percent (£724 million) in 2010, 37 percent (£768 million) in 2011, 38 percent (£780 million) in 2012, and 39 percent (£826 million) in 2013. These same analysts predict further consolidation of the industry, which would favour the larger players such as Elsevier [3].

Toward a More Substantive Transformation of the Scholarly Publishing System

All academics, but especially tenured faculty, need to be reminded of their role in the broader knowledge ecology and of the constraining effects that the current commercial model of journal publishing exercises on this ecology. At the risk of stating the obvious, this is critical since academics benefit from their work being widely disseminated and used (and hopefully cited), not from royalty streams. Put more directly, there is a disconnect between the factors motivating the typical academic writer and the profit-maximising behaviour of commercial publishers. Academics provide the majority of the labour that sustains the production of scholarly knowledge, including the actual research and writing, peer review, and editing. It is time for academics to re-appropriate from for-profit publishers the products and processes of our collective labour in order to revitalise the knowledge commons in ways that serve the public good rather than commercial accumulation imperatives. And although this might require significant amounts of persuasion among some of our more conservative colleagues, I suggest that, logistically, such a re-appropriation would be the lesser challenge.

There already exists a basic publishing infrastructure in the form of non-profit university presses, which should be able to substitute easily for commercial publishers in ways that would not require the assignment of copyright by authors or the imposition of onerous pricing and licensing contracts on library customers. Indeed, university presses have substantial historical experience in facilitating the dissemination of scholarly research across multiple product lines (trade books, scholarly monographs, textbooks, and journals). And, as pointed out above, there exist freely available, technologically sophisticated digital publishing platforms (e.g., Open Journal Systems) of which university presses could avail themselves. I therefore believe that university presses are best positioned to fulfill the key aspects of the scholarly communication system in ways that would promote access while also remedying the fiscal instability of the current corporate-dominated model.

Moreover, and this is the part of the paper that will be further elaborated for the conference, university presses can partner with the increasing number of university libraries that have coalesced through the Library Publishing Coalition project. This group of 60 North American academic libraries has committed itself to expanding the nascent field of library publishing in collaboration with university presses and learned societies. The findings reported in this part of the paper will outline some of the lessons and best practices learned so far by this group of librarians who are actively seeking to collaboratively transform academic publishing into a commonist endeavour better aligned with the production processes and consumption practices of the actual producers and consumers of academic research.

Conclusions

Commercial control of academic publishing, through strategies and practices such as industry consolidation and the forced assignment of copyright, represents an appropriation and enclosure of the knowledge commons that would otherwise emerge from the unrestricted flow of academic research. Put another way, commercial control of academic publishing expedites the private expropriation of much of the value that is produced in common through the cooperative relationships inherent in scholarly production. Yet such appropriation and enclosure need not be tolerated. The success of the open-access movement and its models has demonstrated that there are viable alternatives to the commercial control of academic publishing. However, the dominant open-access regime suffers from an inherent neutrality with respect to economic models that renders it susceptible to commercial appropriation and exploitation. The author-pays model does nothing to dismantle the commodity logic of academic publishing but merely transfers the revenue source from users/readers to the actual producers (authors), which introduces yet another level of exploitation of the producers. Thus, while sympathetic to the goals and objectives of (Gold) open access, I assert that the more formidable imbalance in the scholarly publishing system is the presence and substantial control exercised by for-profit publishers. I therefore believe that we need to become more radical in our thinking and our actions in order to wrest control of academic publishing from the current commercial publishing oligarchs. A group of academic librarians, through their institutions, has begun actively trying to re-appropriate such control. The review and assessment of selected examples of collaborative publishing projects being undertaken by some university presses and academic libraries shines an optimistic light on efforts to exert autonomous self-control over our knowledge commons.

References and Notes

  1. Van Noorden R. The true cost of science publishing: Cheap open-access journals raise questions about the value publishers add for their money. Nature 2013; 495: 426-9.
  2. Laakso M, Welling P, Bukvova H, Nyman L, Björk B-C, Hedlund T. The development of open access journal publishing from 1993 to 2009. PLoS ONE [serial on the Internet]. 2011; 6(6): Available from: http://dx.doi.org/10.1371%2Fjournal.pone.0020961.
  3. Aspesi C, Luong H. Reed Elsevier: Goodbye to Berlin - The fading threat of open access (upgrade to market-perform). New York: Sanford C. Bernstein & Co., LLC, 2014.

 

i Information about the current make-up of the academic publishing industry, including aggregate revenue figures, is surprisingly difficult to locate. Most authors tend to draw on data collected by two private consulting firms, Outsell and Simba Information. However, the price tag of the report compiled by Outsell is $1,850. The price of Simba’s report, $3,250, is even more prohibitive. Multiple efforts by the author to secure a copy of either report through interlibrary loan failed.

Homo Informaticus and Information Society - Some Critical Comments

In this presentation some technological and social trends of contemporary society are assessed and evaluated with respect to their effects on human behaviour and their humane potential. It seems useful to take a long-term perspective on these issues and compare the present with past phases of capitalism.

In his famous book “The Great Transformation” Karl Polanyi has shown how, in the first half of the 19th century, the market economy in Great Britain grew to full stature. His explanation of long-term changes includes economic aspects of class interest but also points beyond them. He argues that class interest is heavily related to standing and rank, to status and security, which are primarily social. Nevertheless, the term “class” is always based on economic characteristics. Its definition depends on the different ways surplus is produced and appropriated in society. Similarly, Polanyi did not give technology an essential role in shaping society: “Social not technical invention was the intellectual mainspring of the Industrial Revolution” (p. 119). This assessment seems true for the type of technological change of a certain period, the age of mechanization, which was the technical backbone of the Industrial Revolution. In the second half of the 19th century Marx characterized mechanical machinery in the following way: “All fully developed machinery consists of three essentially different parts, the motor mechanism, the transmitting mechanism, and finally the tool or working machine.” (Capital, Volume I, Chapter 15). Over the centuries some parts of the mechanical machinery were fundamentally changed. New principles of energy transformation were applied. The motor mechanism, first a steam engine, was replaced by electro-mechanical drives, by the combustion engine and by the gas turbine. Nevertheless the basic structure of mechanical machinery survived (see fig. 1).

One of the most important effects of technology on human beings is the relocation of specific human activities to artefacts. The machine-tool deprived (and also relieved) the worker of the individual handling of the object of work and of the control of the tool. At the same time the worker was replaced by the motor mechanism as the source of mechanical energy. To quote Marx: “No longer does the worker insert a modified natural thing [Naturgegenstand] as middle link between the object [Objekt] and himself; rather, he inserts the process of nature, transformed into an industrial process, as a means between himself and inorganic nature, mastering it. He steps to the side of the production process instead of being its chief actor.” (https://www.marxists.org/archive/marx/works/1857/grundrisse/ch14.htm). It seems evident that with this side step human beings are less burdened by mechanical activities, but technical innovation per se did not change the relations of production. Exploitation and alienation were not the direct result of technology but an effect of the restructuring of the social fabric.

When the replacement of human labour by mechanical devices was more or less complete, the focus of technical innovation shifted to mental work. In the middle of the 20th century a new type of machinery emerged, replacing further elements of human labour: the “Information Processing Machine” (IPM) was born (see fig. 1). From this innovation the information society took its point of departure. It made it possible to transform human perception, human decision-making (even under changing conditions) and human intervention into functions of the new technology. Human senses can now be replaced by microphones, video cameras, thermometers, keyboards, touch-pads etc.; decision-making can be done by electronic devices (first electro-mechanical relays, followed by radio valves, transistors and microprocessors), which are shrinking day by day; and actuators such as (mechanical and electronic) switches, relays, printers, video screens etc. communicate the decisions of the machinery to the outside world (see fig. 1). The still ongoing process of automation consists of the combination of mechanical machinery with information processing systems: the latter monitor and control the former according to computer programs. By the elimination of living labour, the productivity of the labour that remains is boosted to new heights. Human beings are no longer needed for those activities of the production process that were formerly their monopoly.
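The sense-decide-act structure described above can be made concrete with a minimal sketch (purely illustrative; all names, values and the threshold rule are hypothetical, not from the author): sensors stand in for perception, a programmed rule for decision-making, and actuators for intervention.

```python
# Minimal illustrative sketch of the IPM's sense-decide-act loop described
# above. All names, values and the threshold rule are hypothetical.

def read_sensor() -> float:
    """Stand-in for a microphone, thermometer, camera or keyboard."""
    return 78.0  # e.g., a temperature reading

def decide(reading: float, threshold: float = 75.0) -> str:
    """The programmed rule that replaces human decision-making."""
    return "cool" if reading > threshold else "idle"

def actuate(command: str) -> None:
    """Stand-in for a switch or relay acting on the mechanical machinery."""
    print(f"actuator command: {command}")

# The loop that couples the information processing machinery to the
# mechanical machinery it monitors and controls.
actuate(decide(read_sensor()))
```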

Automation is only one application of the IPM. The IPM can easily stand on its own (as a mainframe computer, a personal or laptop computer, or a microprocessor in smart phones), and it can be used within an electronic network. Examples are the Internet and mobile phones.

Figure 1. Automated machinery = mechanical machinery + information processing machinery

(see PDF version for the Figure).

 

To analyse the effects of the large-scale diffusion of the IPM in its various applications, we have to divide the space of society into different fields. Here we focus only on the economic, social, political and psychological spheres.

To start with the economic effects, we observe a tremendous reduction of all kinds of communication and transaction costs (Fleissner 1995), as well as an increase in the productivity of office work and many kinds of creative activities. The direct output of the IPM is the information good, ranging from texts, music, pictures and videos to various kinds of software. Although information goods are non-rival, the capitalist system could not resist artificially limiting their use and their global availability. Commodification of the information good is performed in a dual way: first by technical means, by copy-protection mechanisms, and second by law. Intellectual property rights, together with the increased difficulty of copying information goods, allow the emergence of markets for information goods: the appropriation of profits became possible. In combination with all the electronic devices to retrieve and store information in digital form, a fully-fledged industry was born from scratch. In addition, digital communication opened a new world-wide market for services, supplying more or less smart mobile phones and other supporting electronic devices at high profit rates.

Parallel to the general shrinking of family size in the developed world, down to the limit of the single-person household, the need for social infrastructure has increased considerably. Social and private security and health-care systems, electronic taxation systems, and accounting systems for the consumption of water, electricity or natural gas are linked electronically to the individual. They would no longer be possible without the application of computers and networks. On the other hand, thousands of jobs were lost.

On the social level we can see that the behaviour of younger people is heavily influenced by the IPM in its networked form. The exchange of information with the help of high-tech multi-functional devices, which can record voices, take pictures, store them and pass them on to friends, has become one of the most important activities of children and even grown-ups. As we now know, all these data are transferred to large-scale institutions of surveillance. It seems to be real fun, but the author cannot help thinking that all the digital gimmicks are a kind of distraction from more serious issues. The culture of exchanging selfies is booming. This corresponds very well with the methodological individualism we find in mainstream micro-economics, where the individual entrepreneur is at the centre of the game. The mass media echo and support this tendency: casting shows strengthen the self-control of individuals, adapting them to the demands of the media and the needs of enterprises. Advertising and marketing campaigns once more focus on the individual, not on the community. Slogans like “Geiz ist geil” (“tight is right”), “Ich habe nichts zu verschenken” (“I don't give anything away for nothing”) or “einer hat es, einer wills” (“one owns it, the other wants it” … and takes it, in the advertisement) underline selfish behaviour. Hedonism flourishes. Behaviour during leisure time has also changed. Personal contacts are permanently interrupted by emails or other messages on the smart phone. Being permanently online and available to others ruins any contemplation and thoughtful concentration. Virtual realities offer seductive places for entertainment. There is a shift away from longer-term planning of meetings towards more spontaneous forms. Bullying, too, has increased through the use of Facebook and other social media.

Although in the early stages of the new media there were high expectations of increased democracy, anticipating a power shift in favour of the lower strata of hierarchies, we have learned that only a very small minority really uses the digital machinery for political purposes. On the contrary: the majority uses smart phones and the Internet for the personal exchange of information. Electronic shopping and financial transactions are other popular activities, with an impact on the distribution of shops in the real world in favour of large-scale suppliers like Amazon.

The author has some reason to argue that principles, structures and processes in which many individuals are involved practically, continuously or frequently will shape the individual values and behaviour of the people. We observe a spread of egotism and egocentricity. Community-based forms of production, distribution and living are disrupted. Solidarity and mutual help have come under pressure. What is the reason for this trend? One of the main roots of spreading selfishness seems to lie in the basic structure of our economy: the legal protection of private property in any form, combined with the exploitation of the labour of others. This does not mean that rationality – frequently seen as the central feature of homo informaticus – has to be given up. It still depends on the content and the goals of rational thinking.

Today it is becoming necessary to look for fresh ways of cooperation, solidarity and mutual help to assure a decent life for everybody and to regain control of the economy for the common good.

References

  1. Fleissner, P. Max Webers Bürokratietheorie im Lichte elektronischer Kommunikationsmedien (Max Weber's Theory on Bureaucracy in the Light of the Media of Electronic Communication). In GISI 95 Herausforderungen eines globalen Informationsverbundes für die Informatik F. Huber-Wäschle, H. Schauer und P. Widmayer (Eds.). Springer: Berlin, Germany, 1995; pp. 127-135.
  2. Marx, K. Capital, Volume I, Chapter 15, 1867. https://www.marxists.org/archive/marx/works/1867-c1/ch15.htm#S1
  3. Marx, K. Grundrisse der Kritik der politischen Ökonomie, 1857/1858. https://www.marxists.org/archive/marx/works/1857/grundrisse/ch14.htm
  • Open access
  • 75 Reads
A Fresh Look: The Social Competence of Information Science

1. Introduction. Three Crossroads

In today’s world, trends toward improvement in the quality of life are offset by a regression and degradation of the mental and social environment, both in part due to the massive role of information in society. As at any crossroads, one has the possibility of going forwards or backwards. At this Socio-Political Crossroads, it is necessary to understand the way information operates in order to get on the ‘right’ road.

The science and philosophy of information as disciplines are also at another, closely related crossroads: they may develop in the direction of integration in an Informational Turn, a new way of Informational Thinking as proposed by Wu Kun [1], that can support efforts toward a Global Sustainable Information Society, in Wolfgang Hofkirchner’s term [2]. Alternatively, they may diverge or regress in the direction of increasingly socially irresponsible specialization and scholasticism. This is, then, a Transdisciplinary Crossroads.

A third crossroads, inseparable from the first two, involves the direction of development of the science and philosophy of information as metaphysics. It is a Metaphysical Crossroads that includes a definition of the dynamic relation of man to the universe. Like the other two, it has a positive branch (“Turning One’s Head”, as Gerhard Luhn describes it [3]) leading toward less dysfunction at the individual and social levels. The negative branch implies an on-going blockage of the ethical development of society.

In this paper, I discuss three aspects of information as they relate to a potential information commons. One is the political dimension and the potential commitment to some form of action in which practitioners of information science could be involved. The second is my dialectic logic in reality (Logic in Reality; LIR [4]) that in my view best describes the nature and evolution of information; and the third is the relation of that logic to the dialectic logic of Hegel in some of its current interpretations, as discussed by Fuchs [5]. In this view, Logic in Reality provides a link to science, hence its scientific support of initiatives for the common good. This paper is, accordingly, a response to the first question posed in ICT&S 16, namely, “What contradictions, conflicts, ambiguities and dialectics shape the 21st century information society?”

2. Dialectical Philosophy and Logic

2.1 The Problem of Logic – Again and Still

The current social, political and economic system, with its failures and lack of ethics, is unfortunately supported, directly or indirectly, by the tenets of standard philosophy and in particular its logic. In many theories of society and economics, the underlying logic is essentially bivalent classical logic, a logic of “exclusion”, mirroring the absolute separation between premise and conclusion, set and member of set, and the principle of exclusivity in standard category theory. The situation has scarcely evolved since 1936, when Norris stated [6]: “…practical and technological problems simply cannot be solved by use of Aristotelian logic alone. This is not a logic of forward-looking or intentional activity whether practical or technological.”

For Jacques Ellul in the 1960s [7], the term logic characterized primarily the dysfunction of society, as in “the implacable logic of the market” that exacerbates the separation between the global networks of capital flows, for example in Manuel Castells’s conception [8], and the human experience of disenfranchised workers. Barinaga and Ramfelt [9], quoting Castells, state that one of the challenges of society is that its very logic is based on an idealized, one-sided conception of society that excludes an important part of the world population. Any intellectual approach that weakens, deconstructs or discredits this ideology and proposes workable, socially acceptable alternatives is therefore to be welcomed.

In 2006, Christian Fuchs suggested [10] the need for a new functional “logic of self-organization” as a necessary feature for models to be able to deal with normative aspects of development, so that the “meaning” of the meaning of information is not ambiguous, but includes a moral dimension. At this Conference, Section ICT&S 3 on the Internet, Commodities and Capitalism will deal with the commodity logic of contemporary capitalism.

In a 2009 paper [11], I further described the essential components of my “logic of and in reality” (LIR) and showed that it is capable of addressing and illuminating issues raised by Hofkirchner, Fuchs et al. in their evolutionary “Salzburg Approach”. LIR grounds a logical approach to the evolution of both groups and individuals and their interaction, and to the negative as well as the positive aspects of current technological developments. LIR provides a new logical interpretation of key concepts in social theory, including morality, cooperation and conflict, grounding them in physical reality and authorizing patterns of inference. The term “evolutionary” is discussed in terms of similarities and differences with biological evolution; LIR offers a logical explication and expansion of Fuchs’ statement that nature and society are both identical and non-identical.

2.2 The Relation to Hegel

Logic in Reality is both dialectical and transcendental in the sense of Hegel. It is dialectical in that the law of non-contradiction fails, and transcendental in that it ‘straddles’ the opposition between subject and object [6]. Such a logic is close to an ontology; that is, it says something about the nature of things. Of course, the conceptual structure of reality that LIR offers includes information, to which Hegel did not have access. None of it, however, is inconsistent with the principle of contrastive dialectics but rather reinforces it. As I have pointed out elsewhere, LIR supplements Hegel by adding a descending dialectic to Hegel’s ascending one and incorporates a necessary ground at the lowest physical level of reality. Logic in Reality makes it possible to enter the dialectic process from science itself; that is, the entities postulated and in part proven by science (to all but hard-core anti-realists), for example in quantum physics, are compatible with a philosophical sublation and indeed isomorphic to it. Elements and their contradictions or oppositions follow the same pattern of evolution and emergence.

2.3 Žižek and Fuchs: A Fresh Look

The philosopher Slavoj Žižek is a devastating critic of the current late-capitalist politico-economic system and its “pseudo-natural logic”. Calling our society an Information Society is already an ideological statement, although not recognized as such, since it suggests degrees of freedom from capitalism that do not exist. In a major book, Living in the End Times [12], Žižek shows how this anti-humanist system is reflected in current art – literature and cinema – even in its ‘New Age’ form supposedly opposed to the current capitalist paradigm. One must reject the ideology at work in technology and the artificial solutions it proposes. However, “It is not enough to demand an ecological reorganization of capitalism, but neither will a return to a pre-modern organic society and its holistic wisdom work.” Žižek thus calls for a “fresh look” at the uniqueness of our situation, a concrete social analysis of the economic, political and ideological roots of our problems. A reconceptualization of dialectical logic is necessary, to which Logic in Reality may contribute.

In the paper prepared for this Summit referring to Žižek, Fuchs [5] states that capitalist society operates in such a way as to maintain the continuity of capitalism as a system in the face of contradictions resulting from the discontinuities that are a consequence of the ICTs. In Lupascian terms, ‘energy’ needs to be added to permit a resolution of these contradictions at a higher level of reality; in other words, to convert the ICTs into an information commons, a non-capitalist information society. Logic in Reality, in my view, should be the preferred language for discussing complex interrelated contradictions and dialectics of dialectics, a term used by Lupasco [13].

3. What Has Happened to the Common Good?

The environments for human existence that can be considered components of the common good are the following: 1) the informational environment, defined by the revolution in the information and communications technologies (ICTs); 2) the natural global environment which, apart from some local improvements, is undergoing massive and possibly irreversible degradation; and 3) the local socio-economic environment in which individual human beings evolve.

In a recent book [14], whose title is that of this section, François Flahault shows that social reciprocity and coexistence are the essential requirements for a satisfactory individual life, defining the real, non-economic “common good”. However, the necessary codification of the rights of individuals, in the Universal Declaration of Human Rights in the aftermath of World War II, is now interpreted in a context of market-driven globalization of the ICTs, leading to a drastic and inhuman devaluation of the common good. What is new and problematical in these environments is not technology (science and engineering per se) but the ever-increasing space, material and mental, that is abusively occupied by the artifacts of technologies and their misdirection toward individual selfish goals. Unless philosophers and logicians as well as scientists address these issues, they will have failed to address the reality of our world.

4. The Social Competence of Information Science

What is thus missing in the information science literature (and in the first part of this memorandum) is the socio-political dimension, the social, economic and political context in which any application of a more ethical philosophical theory must be made. This paper may thus be considered as having aspects of a social critique, a ‘social philosophy’ in the sense of the neo-Marxist Franck Fischbach [15]. I see the entire Summit as social philosophy in this sense. It is an ethical reflection on an informational commons as a necessity for that commons, and it is at the same time a political reflection on the process of struggle to achieve it. Fischbach’s social philosophy does not separate the social from the political, sparing us the effort of putting them back together subsequently.

The complex entity constituted by the participants in this Summit and their transdisciplinary contributions confers a competence and a unique credibility on them. Practitioners of Information Science start with an advantage of being at the heart of the defining technology of our “Information Age”, and I suggest that this is recognized by every user of the technology, that is, everyone.

5. Conclusions

In the perspective outlined here, Logic in Reality is the thread that runs from the foundations of the nature of information in the physical structure of the world, through the informational characteristics of human beings, to those of the society that defines the context for human existence as a social animal. LIR is therefore a new tool to use in the ‘struggle to learn how to struggle’. In the metaphilosophy of information of Wu Kun [16], transdisciplinary informational activities have as a direct consequence the weakening of centralized governments and political institutions and, correspondingly, a strengthening of the commons.

Finally, encouraged by Fuchs’s demonstration of the importance of Heraclitus’ version of dialectics, I close with a fragment that I feel is à propos to a Summit about an information commons: “Fragment 2: Accordingly, one ought to follow what is common, that is to say, what is universal. For the universal Word is common to all. …”

References

  1. Wu, K. 2010. The Basic Theory of Philosophy of Information. Paper, 4th International Conference on the Foundations of Information Science, August, 2010, Beijing.
  2. Hofkirchner, W. 2013. Emergent Information; A Unified Theory of Information Framework. World Scientific Publishing Company: Singapore.
  3. Luhn, G. and G. Hüther. Wende im Kopf (Turning One’s Head). Radebeul, Göttingen: unpublished manuscript.
  4. Brenner, J. E. 2008. Logic in Reality. Springer: Dordrecht.
  5. Fuchs, C. 2014. The Dialectic: Not just the Absolute Recoil, but the World’s Living Fire that Extinguishes and Kindles Itself. Reflections on Slavoj Žižek’s Version of Dialectical Philosophy in Absolute Recoil: Towards a New Foundation of Dialectical Materialism. tripleC 12(2): 848–875.
  6. Norris, O. O. 1936. The Logic of Science and Technology. Philosophy of Science 3 (3), pp. 286-306.
  7. Lovekin, D. 1977. Jacques Ellul and the Logic of Technology. Man and World 10(3), 251-272.
  8. Castells, M. (2004). The Information Age: Economy, Society and Culture. Volume II The Power of Identity. Blackwell Publishing: Malden/Oxford/Carlton.
  9. Barinaga, E. and L. Ramfelt. Kista – The Two Sides of the Network Society. Networks and Communications Studies 18(3-4): 225-244.
  10. Fuchs, C. 2006. The Dialectic of the Nature-Society System. tripleC 4(1): 1-39.
  11. Brenner, J. E. 2009. Prolegomenon to a Logic for the Informational Society. tripleC 7(1): 38-73.
  12. Žižek, S. 2011. Living in the End Times. Verso: London.
  13. Lupasco, S. 1987. Le principe d’antagonisme et la logique de l’énergie. Editions du Rocher: Paris. (Originally published in Paris: Éditions Hermann, 1951).
  14. Flahault F. 2011. Où est passé le bien commun? Paris: Mille et Une Nuits.
  15. Fischbach, F. 2009. Manifeste pour une philosophie sociale. Éditions La Découverte: Paris.
  16. Wu, K. and J. E. Brenner. The Informational Stance: Logic and Metaphysics. Part II From Physics to Society. Logic and Logical Philosophy 23: 81-108.
  • Open access
  • 164 Reads
Big Data - Towards a New Techno-Determinism?

Big data promises a multitude of innovative options to enhance decision-making by employing algorithmic power to extract valuable information from unstructured data sets. Exploiting petabytes of data is framed as a remedy for dealing with complexity and reducing uncertainty by paving the way for predictive analytics. However, the increasing complexity of big data analysis, combined with increasing automation, may trigger not merely uncertain but also unintended societal events.

Big data is often defined as “high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making”. This definition stems from the Gartner Group (2001) and not least mirrors the strong role IT marketing plays in the big data discourse, as it puts emphasis on presenting big data as a novel form of information processing that efficiently enriches decision-making. Less mystifying, boyd and Crawford (2012) define big data as “a cultural, technological, and scholarly phenomenon” that rests on the interplay of technology, analysis and mythology. The latter addresses the “widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy” (boyd/Crawford 2012).

This dimension of mythology is of particular interest in this contribution, which aims at de-constructing some of the major claims of big data enthusiasm, such as the claim that the exploitation of large, messy data sets allows one to gain more insights in a natural, self-evident way, since “[w]ith enough data, the numbers speak for themselves” (Anderson 2008). In line with this delusive view is the perception that data quality decreases in importance and that finding correlations is the key to better decision-making. Big data is closely linked to the trend of “datafication” (Cukier/Mayer-Schönberger 2013), which aims at gathering large amounts of everyday-life information and transforming it into computerized, machine-readable data. Behind the scenes of big data mystique and related trends there might be a new paradigm of data pragmatism on the rise, as Boellstorff (2013) pointed out: “Algorithmic living is displacing artificial intelligence as the modality by which computing is seen to shape society: a paradigm of semantics, of understanding, is becoming a paradigm of pragmatics, of search”. If there is such a shift away from semantics, then syntax might become more meaningful, especially for big data analysis. Together with an increase in automated decision-making, big data then entails high risks of false positives and self-fulfilling prophecies, especially if correlation is mixed up with causation, as the big data discourse suggests. This is inter alia visible in one of the seemingly “big” success stories, namely Google Flu Trends, which was celebrated for its highly accurate prediction of the prevalence of flu. However, as Lazer et al. (2014) pointed out, in the end the prevalence of flu was overestimated in the 2012/13 and 2011/12 seasons by more than 50%. This and other examples underline the seductive power of perceiving big data as a novel tool for predicting future events. If the results of predictive analytics are blindly trusted, then their verification or falsification can become complicated, particularly if a predicted event triggers action to prevent that very event. Together with developments towards predictive policing, which aims at identifying “likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions” (Perry et al. 2015), big data entails a number of serious challenges that can even strain cornerstones of democracy such as the presumption of innocence or the principle of proportionality. Threat scenarios referring to the movie “Minority Report” might be overestimated. However, automated predictive analytics might increase the pressure to act and make it harder to identify the red line between appropriate intervention and excessive pre-emption.
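The false-positive risk named above can be made concrete with a minimal base-rate sketch (all numbers are hypothetical and for illustration only, not from the text): when the predicted event is rare, even a highly accurate predictive system flags mostly wrong targets.

```python
# Hypothetical illustration of the false-positive risk of predictive
# analytics applied to rare events. All numbers are invented.
population = 1_000_000
base_rate = 0.001        # only 0.1% of cases are real "targets"
sensitivity = 0.99       # P(flagged | target)
specificity = 0.99       # P(not flagged | non-target)

targets = population * base_rate
non_targets = population - targets

true_positives = targets * sensitivity
false_positives = non_targets * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"cases flagged: {true_positives + false_positives:,.0f}")
print(f"share of flags that are real targets: {precision:.1%}")  # about 9%
```

Under these assumptions roughly nine out of ten flagged individuals are false positives, which is exactly why blind trust in such flags strains the presumption of innocence.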

Big data algorithms (e.g. MapReduce) are most likely to be probability-calculating pattern-recognition techniques. From a meta-perspective, big data might be understood as paving the way for a new techno-determinism that is capable of re-shaping the future by transforming possibilities into probabilities. In this sense, big data might become a new source of political, economic and military power (Zwitter/Hadfield 2014). Implications range from sharpened views on realistic options for decision-making to constrained spaces of possibility that impact privacy, informational self-determination and the autonomy of the individual. Together with its “supportive relationship with surveillance” (Lyon 2014), big data can reinforce a number of related threats, such as the blurring of boundaries between personal and non-personal information, de-anonymization and re-identification techniques (cf. Strauß/Nentwich 2013), and risks of surveillance such as profiling, social sorting and digital discrimination.
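For readers unfamiliar with the MapReduce model mentioned above, a single-machine toy sketch of its map/shuffle/reduce phases (a generic word-count example, not a claim about any particular big data deployment):

```python
# Toy, single-machine illustration of the MapReduce programming model:
# map emits (key, 1) pairs, shuffle groups pairs by key, reduce aggregates.
# Real systems distribute these phases over many nodes; the statistics
# produced (frequencies) are the raw material of pattern recognition.
from collections import defaultdict

documents = ["big data big promises", "data beats theory", "big claims"]

# Map phase: emit (word, 1) for every word in every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the intermediate pairs by key.
groups = defaultdict(list)
for word, one in mapped:
    groups[word].append(one)

# Reduce phase: aggregate each group into a count.
frequencies = {word: sum(ones) for word, ones in groups.items()}
print(frequencies)  # {'big': 3, 'data': 2, 'promises': 1, ...}
```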

Big data represents a new source of networking power which (like every technology) can be a boost or a barrier to innovation in many respects. The “shady side” of winning new insights for decision-making may be new power asymmetries, in which a new data pragmatism celebrating quantity and probability curtails quality and innovation. To reduce the risks of big data, it seems reasonable to reconsider the thin line between the overestimated expectations and the underrepresented moments of uncertainty that accompany the big data discourse.

References and Notes

Anderson, C. (2008): The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired, http://www.wired.com/science/discoveries/magazine/16-07/pb_theory

Boellstorff, T. (2013): Making big data, in theory. In: First Monday Vol. 18, No. 10.

Boyd, D., Crawford, K. (2012): Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon. In: Information, Communication & Society, Vol. 15, No. 5

Gartner Group (2001): "3D Data Management: Controlling Data Volume, Velocity and Variety" http://www.gartner.com/it-glossary/big-data/

Cukier, K., Mayer-Schönberger, V. (2013): Big Data: A Revolution That Will Transform How We Live, Work and Think. Houghton Mifflin Harcourt.

Perry, L., McInnis, B., Price, C. C., Smith, S. C., Hollywood, J. S. (2015): Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations.

Lyon, D. (2014): Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique. In: Big Data & Society 2014, 1-13. DOI: 10.1177/2053951714541861

Lazer, D. , Kennedy, R., King, G., Vespignani, A., (2014): The Parable of Google Flu: Traps in Big Data Analysis. In: Science Magazine, Vol. 343 no. 6176, pp. 1203-1205. DOI: 10.1126/science.1248506

Strauß, S., Nentwich, M. (2013): Social network sites, privacy and the blurring boundary between public and private spaces. Science and Public Policy 40 (6), pp. 724-732

Zwitter, A. J., Hadfield, A. (2014): Governing big data. In: Politics and Governance, Vol. 2 Issue 1, pp. 1-2

  • Open access
  • 71 Reads
The Convergence of the Philosophy and Science of Information

1. Introduction

At the 4th International Conference on the Foundations of Information Science five years ago, Professor Wu Kun of Xi’an Jiaotong University in Xi’an, China published the first compendium in English of his work on the science and philosophy of information [1]. In particular, he indicated the central role of the Philosophy of Information (PI) and its impact on science and the Philosophy of Science (PS). This theory can now be usefully compared to the Philosophy of Information developed independently by Luciano Floridi in Europe [2].

At the 1st International Conference on the Philosophy of Information, held in Xi’an in October 2013, Wu presented further work on the mutual impact of the PI on science and philosophy, which he describes as the scientification of philosophy and the philosophization of science. His view is echoed by the Dutch logician Pieter Adriaans, who has also observed the major impact of information on philosophy itself. You have heard Professor Wu’s presentation of his major themes in his opening talk for this 2nd International Conference on PI. I will refer to them briefly below.

2. Positioning the Philosophies of Science and Information

There are today several attitudes, negative as well as positive, that can be taken toward the statement that the Philosophy of Information is the locus of a revolution in philosophy and science. I accept it as a challenge to give ontological status to the changes in philosophy and the Philosophy of Science that might result from the incorporation of concepts from Information Science. However, major revisions in what philosophy and some parts of science are, and how they evolve, may have to become accepted.

2.1 What is Science?

The impact of information on science, via the Information and Communications Technologies (ICTs), is a complex process in which ‘science’ should not be viewed as a single discipline, and any impact will depend on what sciences are involved. I characterize two aspects of science: 1) its rough segmentation along the lines of ‘hard’ and ‘soft’, experimental and conceptual; and 2) the relative independence of different scientific disciplines of both types. It is a statement within the Philosophy of Science and of Information that what links the process, the pattern and the content of the sciences, including that of information itself, is their informational characteristics.

The concept of scientific method is only one of those meaningful in the contemporary practice of science.

Computational methods can be and are applied routinely in all the sciences. In the human domain, it is the application of operative or organizational principles to an individual or social cognitive process to determine its dynamics, what ‘forces are at work’, that is essential for the determination of an informational commons.

2.2 What is Information?

Information is an entity or process, or set of entities and processes, that is unique in both science and philosophy. It requires acceptance as a concept that cannot be defined as an identity, but only as a dynamic interactive dualism of matter-energy (ontological properties) and meaning (epistemological properties). Cognitive processes, as well as their corresponding analyses and theories, instantiate similar dualities, of which the prime example is that of self and other.

Information is somehow associated with, or constitutive of, existence, but it has proven notoriously difficult to define and characterize, due to its multiple duality: it has both physical and apparently non-physical components, both a real dynamics and algorithmic descriptions. I note that both science and philosophy involve the observation of regularities in nature, which differ only in the degree of certainty that can be ascribed to them. Greater rigor in philosophy does not come easily; however, the properties of information common to both science and philosophy can be used to reconcile the physical, scientific properties of information with its epistemological, philosophical characteristics as a carrier of meaning. Both a physics (science) of information and a mutually consistent philosophy of information are required, and the two must inform one another.

3. Semiotics

Semiotics, the study of signs as categorizing linguistic entities and processes in the representation of meaning, occupies a position intermediate between the philosophy and science of information. It therefore has a role to play as a system of classification that complements what we learn about the universe in general from scientific facts about it. (The relation between Semiotics and Information Science will be discussed in detail at a separate Roundtable in this Conference stream.)

Søren Brier argues [3] for a transdisciplinary framework in which signs, meaning and interpretation are the foundational concepts within which information concepts have to function, and in which C. S. Peirce’s concept of semiosis creates such a new paradigmatic transdisciplinary framework. This semiotic doctrine, however, can be criticized as giving a more central role to signs as representations of reality than to the dynamic properties of reality itself.

Standard semiotic theory is particularly concerned with explicating higher-order concepts such as meaning, sign use, representation, language and intersubjectivity, along with their interrelations. Cognitive Semiotics, as developed by Jordan Zlatev and others, brings in empirical research which can both contribute to their explication and, at the same time, produce new insights. Cognitive Semiotics is thus less dependent on any particular semiotic theory. Its language remains more that of phenomenology than of science, but it can be developed together with the transdisciplinary theories outlined below.

4. The Convergence of Science and Philosophy

4.1 Transdisciplinarity. Some New Theories

In recent papers, Pedro Marijuan [4] has called for the transdisciplinary incorporation of insights from several sciences, an ‘intertwining’ of disciplines to enable further understanding of information and the foundations of information science. I argue that the Philosophy of Information can be included in this process as a consequence of the convergence of science and philosophy under the influence of information science, due to the properties of information itself.

From a philosophical perspective, the works I consider the major recent advances in the philosophy of science are almost all realist. Information processes are described as involving qualitative and quantitative changes in the amount and value of something irreducibly real in information. These advances include the following:

  • Terrence Deacon [5, 6]: information as ‘absence’; from dynamics to teleodynamics;
  • Luhn [7]: the causal-computational theory of information; the embedding of the individual human being in an informational reality with a dual structure;
  • Hofkirchner [8]: the re-ontologization involved in the different aspects of the informational revolution in progress;
  • Brenner [9]: an extension of logic to real phenomena (Logic in Reality; LIR) that allows inferences about the energetic-ontological properties of information rather than truth-conditions.

A trend or tendency, e.g., toward the greater acceptance of non-standard logic in science, is not something that can be proven in ‘hard’ science. Nevertheless, if it is stated rigorously, a dialogue may be possible between the proponents and deniers of the trend. It is a corollary of LIR that both realist and anti-realist positions will always exist.

4.2 Wu Kun and the Informational Turn

Wu Kun has clearly brought out the ontological impact of information on philosophy. In his recent papers, Wu Kun presents detailed arguments for a new perspective on philosophy and science and the changes which they are undergoing under the influence of the informational activities of the society. I indicate here five position statements that constitute the ‘backbone’ of Wu’s metaphilosophical theory [10]:

Mind – Matter Dichotomy

With some notable exceptions, the bulk of philosophical doctrine is still based on the categorial separability of mind and matter. Despite advances in neurobiology and related sciences, the laws of reasoning and logic remain isolated from physical science, a part of semiotics as discussed above. Wu’s first contribution to the discussion is to show that the existence of information requires a resegmentation of the existential field, making the mind-matter dichotomy on which much current thinking is still based untenable.

The Science – Philosophy Dichotomy

The concomitant acceptance of the philosophical duality of information and the recognition of its physical duality abrogate any absolute separation between science and philosophy. This principle, within the Philosophy of Information (PI), is becoming applicable to science and consequently the Philosophy of Science (PS), as well as philosophy itself.

The Position of Information in Reality. Properties

Any complex real entity, e.g., a person, can be considered as constituted by the totality of the informational processes, past, present and potential in which he is involved. The intermediate stages which bridge the gap between external and internal reality are all informational. This approach is consistent with Deacon’s approach to the hierarchy of dynamics and second-order constraints necessary for the emergence of life.

Implications for the Information Society

Wu sees the multi-dimensional informational structures and processes in society as reducing domination by central governmental control. They thus support, in principle, an increase in democracy, including information as a commons. Further work is needed, however, to determine whether there is a direct correlation between the operative principles of Wu’s Philosophy of Information and the political change necessary to implement them.

The Informational Turn

As a discipline, Information Science has a unity by virtue of its spanning human knowledge from philosophy to science and engineering, with both vertical and horizontal relations between its component sub-disciplines. The further integration of Information Science and the Philosophy of Information implies a major Informational Turn in the current practice of both science and philosophy.

5. Conclusion and Outlook

In this paper, I have suggested that the unique dualism of information has ipso facto major implications for science and philosophy, as a new form of cognitive object that tends toward their mutual integration. In the conception of Wu Kun, this perspective of the ‘scientification of philosophy’ and the ‘philosophization of science’ is not intended to eliminate the specificity of the two disciplines nor their individual development at the theoretical level, but requires the acceptance of the non-separability of certain kinds of science and philosophy. The consequence may be an improved understanding, at the ethical and social level, of a more logical approach, in the sense of its relation to reality, to the eventual resolution of on-going conflicts in the information society.

An urgent task, then, is to find new ways of correlating and organizing the insights obtained from the corresponding different perspectives, directed not toward some impossible unity but toward a new functional form of knowledge. The output of this Summit should be exemplary in combining method and content to begin to fulfill the promise of information.

References

  1. Wu, K. 2010. The Basic Theory of the Philosophy of Information. Paper, 4th International Conference on the Foundations of Information Science, August, Beijing.
  2. Floridi, L. 2010. The Philosophy of Information. Oxford: Oxford University Press.
  3. Brier, S. Cybersemiotics: Why Information is not Enough; University of Toronto Press: Toronto, Canada, 2008
  4. Marijuan, P.C. The Uprising of the Informational: A New Way of Thinking in Information Science. Presented at the 1st International Conference in China on the Philosophy of Information, Xi’an, China, 18 October 2013.
  5. Brenner, J.E. 2012. The Logical Dynamics of Information; Deacon’s “Incomplete Nature”. Information 2012, 3, 676-714
  6. Deacon, T. W. 2012. Incomplete Nature. How Mind Evolved from Matter. New York: W. W. Norton & Co.
  7. Luhn, G. The Causal-Compositional Concept of Information Part I. Elementary Theory: From Decompositional Physics to Compositional Information. Information 2012, 3, 151-174
  8. Hofkirchner, W. Emergent Information: A Unified Theory of Information Framework; World Scientific: Singapore, Singapore, 2013.
  9. Brenner, J. E. 2014. Information: A Personal Synthesis. Information 2014, 5, 134-170.
  10. Wu, K. 2013. The Development of Philosophy and its Fundamental Turn. Presented at the 1st International Conference in China on the Philosophy of Information, Xi’an, China, 18 October 2013. (Originally published in Journal of Humanity 5:1-6).
  11. Wu, K. and Brenner, J. E. 2014. The Informational Stance: Philosophy and Logic. Part II From Physics to Society. Logic and Logical Philosophy 23: 81–108.
  • Open access
  • 68 Reads
Emergent Innovation: Sustainable Innovation as Learning From the Future as It Emerges. On Uncertainty and Creating Opportunities by Exploring Potentials and Adjacent Possibles

Rethinking Innovation

Most approaches to innovation extrapolate past experiences into the future, or they use “wild” out-of-the-box thinking methods for generating new knowledge or perspectives. However, how can we create something radically new that has at the same time been “waited for”, although nobody has explicitly known or seen it; i.e., something that, despite its perhaps radical newness, appears at just the right time and place (“kairos”) and fits organically into the existing environment (be it a market, an organization, a culture, or society)? The challenge is to create something that is both radically new and fits into existing patterns of perception and thought. It is a kind of “innovation from within”, an innovation that, despite its novelty, respects what is already there and makes use of its potentials for creating something new in a sustainable and thriving manner. We refer to this alternative approach to innovation as Emergent Innovation (Peschl & Fundneider, 2008, 2013, 2014). Both its theoretical foundations and a concrete, well-proven innovation process will be introduced.

Emergent Innovation and adjacent possibles

Besides other approaches from the fields of innovation, cognitive science, and epistemology, this approach is based on S. Kauffman's concept of adjacent possibles (Kauffman, 2014) and C. O. Scharmer's Theory U (Scharmer, 2007). Whenever we are dealing with the challenge of sustainable radical innovation, we are confronted with a kind of uncertainty in which the future is not only unknown, but also unknowable. In such a perspective both search- and solution-spaces are unknown and, even “worse”, they change permanently, as they are continuously co-created in a process of interaction between environmental structures/dynamics, potentials, and cognitive system(s) (and their evolving needs). Comparing this situation with evolutionary dynamics, “…we do not know what all the possibilities are for such preadaptation, so we do not know the unprestatable and ever changing phase space of evolution. Not only do we not know what ‘will’ happen, we do not even know what ‘can’ happen.” (Longo, Montevil, & Kauffman, 2012, p. 1384) For such a tricky situation, it is necessary to create niches or spaces for potentials (in Kauffman's (2014) terms “adjacent possibles”; see also Felin, Kauffman, Koppl, & Longo, 2014; Koppl, Kauffman, Felin, & Longo, 2014) in which innovations may emerge.
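A toy sketch can make the combinatorial core of the adjacent possible tangible (our illustration, not the authors'): from a set of already-actualized elements, the adjacent possible is the set of new combinations reachable in one recombination step, and each actualized possibility enlarges the space reachable next.

```python
# Toy illustration (not from the authors) of Kauffman's "adjacent possible":
# the set of one-step recombinations of what already exists.
from itertools import combinations

def adjacent_possible(actualized: set[str]) -> set[str]:
    """All pairwise recombinations of actualized items not yet actualized."""
    return {a + "+" + b for a, b in combinations(sorted(actualized), 2)} - actualized

actual = {"wheel", "axle", "engine"}
print(adjacent_possible(actual))       # three reachable combinations

# Actualizing one possibility opens further adjacencies: the phase space
# itself changes with every step, as the quotation above emphasizes.
actual.add("axle+wheel")
print(len(adjacent_possible(actual)))  # five further combinations
```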

Towards an epistemology of potentiality

It will be shown that a new kind of “cognition and epistemology of potentiality” is needed in order to accomplish such processes as “learning from the future” and “listening to the future as it emerges”. It involves a whole new set of cognitive abilities, attitudes and epistemological virtues, such as radical openness, deep observation, skills of deep understanding, reframing, identifying and cultivating potentials, etc.

The second part of this talk presents the Emergent Innovation approach, which applies these theoretical concepts in a concrete process design. It is a socio-epistemological innovation technology bringing forth profoundly new knowledge and innovations with the qualities explicated above. The practical concepts, the implications, as well as the lessons learned will be discussed.

References and Notes

Felin, T., Kauffman, S. A., Koppl, R., & Longo, G. (2014). Economic opportunity and evolution: beyond landscapes and bounded rationality. Strategic Entrepreneurship Journal, 8(4), 269–282.

Kauffman, S. A. (2014). Prolegomenon to patterns in evolution. BioSystems, 123(2014), 3–8.

Koppl, R., Kauffman, S. A., Felin, T., & Longo, G. (2014). Economics for a creative world. Journal of Institutional Economics, 2014, 1–31.

Longo, G., Montevil, M., & Kauffman, S. A. (2012). No entailing laws, but enablement in the evolution of the biosphere. In Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation (pp. 1379–1392). Philadelphia, PA. (doi:10.1145/2330784.2330946)

Peschl, M. F., & Fundneider, T. (2008). Emergent Innovation and Sustainable Knowledge Co-creation. A Socio-Epistemological Approach to "Innovation from within". In M. D. Lytras, J. M. Carroll, E. Damiani, D. Tennyson, D. Avison, & G. Vossen (Eds.), The Open Knowledge Society: A Computer Science and Information Systems Manifesto (Communications in Computer and Information Science, Vol. 19, pp. 101–108). New York, Berlin, Heidelberg: Springer.

Peschl, M. F., & Fundneider, T. (2013). Theory-U and Emergent Innovation. Presencing as a method of bringing forth profoundly new knowledge and realities. In O. Gunnlaugson, C. Baron, & M. Cayer (Eds.), Perspectives on Theory U: Insights from the field (pp. 207–233). Hershey, PA: Business Science Reference/IGI Global. (doi:10.4018/978-1-4666-4793-0)

Peschl, M. F., & Fundneider, T. (2014). Evolving the future by learning from the future (as it emerges)? Toward an epistemology of change. Behavioral and Brain Sciences, 37(4), 433–434.

Scharmer, C. O. (2007). Theory U. Leading from the future as it emerges. The social technology of presencing. Cambridge, MA: Society for Organizational Learning.

  • Open access
  • 72 Reads
How Information Lost Its Meaning (and How to Recover It)

The technical concept of information developed from Shannon (1948) and those who followed has fueled advances in many fields, from fundamental physics to bioinformatics, but its quantitative precision and its breadth of application have come at a cost: they have impeded its usefulness in fields distinguished by the need to explain reference and functional significance, such as evolutionary biology, cognitive neuroscience, the social sciences, and even the humanities. In order to provide the foundation for a theory of information that is sufficiently precise and formal to serve these needs, it is necessary to expand and slightly reformulate the technical concept of information in a way that accounts for these properties, which are not intrinsic statistical properties of the conveying medium. Ironically, I will argue that this nevertheless requires attending to the physical properties of the information medium with respect to those of its context, and specifically to the relationship between thermodynamic and information entropies. I will argue that referential information is a function of the thermodynamic openness of the medium to physical work, and thus of the modifiability of the constraints that it embodies. Susceptibility to physical work is also the relevant measure when it comes to assessing the significance or usefulness of information, since this is reflected in the amount of work “saved” as a result of access to information about certain contextual factors. This suggests a way to re-integrate these previously set-aside properties of reference and significance into a rigorous analysis of information that can be made compatible with the full range of semiotic issues.
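As a baseline for the critique above, a minimal sketch of the post-Shannon quantitative measure (the standard textbook formula, not the author's proposed reformulation): it depends only on the statistics of the medium, so two messages with very different referents can receive identical measures.

```python
# Shannon entropy H = -sum(p * log2(p)): the purely statistical measure
# whose blindness to reference and significance is the "cost" at issue.
# (Standard textbook formula; not the author's proposed reformulation.)
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits, from symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Two messages with the same symbol statistics but different referents
# are indistinguishable to the measure:
print(shannon_entropy("fire in the theatre"))
print(shannon_entropy("theatre in the fire"))  # identical value
```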
