
List of accepted submissions

Can Cybersemiotics solve the problem of informational transdisciplinarity?

A transdisciplinary theory of cognition and communication has to be described from at least the following paradigms: 1. an objective information-processing view, or info-mechanicism, because it fits the findings and demands of the natural and technical sciences; 2. a social-constructivist view, because it fits the findings and demands of the social sciences, focusing on communication and culture; 3. a systemic-cybernetic view, as it relates to general system theory’s theory of emergent evolution, Bateson’s pattern that connects all living beings, and the autopoietic nature of living organisms, which makes them resistant to the transfer of objective information; 4. a Peircean semiotic paradigm, including biosemiotics, because it has a realist hermeneutical concept of meaningful communication with a phenomenological foundation that encompasses all living beings. But each approach has its transdisciplinary shortcomings. Is it possible to integrate these approaches consistently into a transdisciplinary framework that unites the phenomenological and hermeneutical aspects of, for instance, Peircean semiotic logic with a cybernetic, systemic, autopoietic, emergentist, process-informational view? Cybersemiotics is one such attempt.

Information Dynamics, Computation and Causality in Reprogramming Artificial and Biological Systems

In this talk, I will explain how algorithmic information theory (the mathematical theory of randomness) and algorithmic probability (the theory of optimal induction) can be used in molecular biology to study and steer artificial and biological systems, such as genetic networks, and even to reveal some key properties of the cell's Waddington landscape, and how these aspects help in tackling the challenge of causal discovery in science. We will explore the basics of this calculus, based on computability, information theory and complexity science, applied to both synthetic and natural systems.
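To give a rough flavour of this kind of calculus, the sketch below (my own illustration, not the speaker's methods, which build on the Coding Theorem and related tools) uses lossless compression as a crude, computable upper bound on algorithmic (Kolmogorov) complexity, which is itself uncomputable: regular sequences compress well, random-looking ones do not.

```python
import random
import zlib

def complexity_proxy(data: bytes) -> int:
    """Compressed length in bytes, a rough stand-in for algorithmic complexity."""
    return len(zlib.compress(data, 9))

random.seed(0)
structured = b"01" * 500                                   # highly regular sequence
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # incompressible-looking

print(complexity_proxy(structured))  # small: the repeating pattern is found
print(complexity_proxy(noisy))       # close to 1000: little structure found
```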

Regeneration of Information: A Model-Theoretic Approach

Regeneration is an operation whereby an organism restores a segment that has been severed from one of its limbs. Analogs to this process can be found in non-biological contexts, such as restoration. When an area in a painting has faded or been damaged, experts apply a variety of techniques in their efforts to restore the original work. Image processing provides yet another context; faced with a photograph from which a portion is missing, we can use a number of methods for its restoration. Another example, very relevant to vision, can be found in the eye's "filling in", or completion, of missing visual information. In order to describe what takes place in regeneration, we use words such as "extrapolation" or "interpolation" to indicate what we mean. This does not yet approach formal analysis, however, nor any methodical investigation of the operation.

 

  1. Informal Motivation

In my paper I wish to propose a definition for a certain type of regeneration, and will do so in the general terms of model theory. More specifically, I wish to apply an analysis of the expansion of functions in mathematics, and definitions of the process of natural expansion, to the study of regeneration in general (Buzaglo 2002). I will propose several requisite concepts and point out a number of basic characteristics and questions.

The type of regeneration that I intend to treat can be found in the following example. Consider the natural numbers with addition and multiplication. It is important that they be given as a set on which functions are defined, and that they conform to certain laws. Now, "erase" one of the equations: in the new situation, let us assume that 1 + 7 no longer equals 8 and becomes, in effect, undefined. We have obtained a model in which the function of addition is only partially defined. Something in this model is "flawed", and yet it is, nevertheless, open to repair. Should we wish to retain the commutativity of addition in the world without 1 + 7, we will have no choice but to expand 1 + 7 to equal 8.

Note that this provides no proof that 1 + 7 is 8, but rather an expansion of the function's scope of definition. The distinction is clear: while in the case of proof we seek to determine the truth value of a claim that is already defined, here we face a situation in which a function is used to define something that previously was undefined.

We can also erase 7 + 1 and it will still be possible to repair the model, only in this case we shall appeal not to the law of commutativity but to the law of associativity. Erasing the second equation requires that we seek an alternative avenue for regeneration. It is likely that, should we continue to erase, the model will still be reparable and restorable; but should we erase a sufficiently large part, it will be impossible to restore the model to its previous state. This is what we also find in the case of biological regeneration. We know that if we sever a small piece of a lizard's tail the lizard will succeed in restoring it, but sever a large enough piece and the restoration will fail. There are also levels of regeneration: while some organisms have the wondrous capacity to restore entire limbs, mammals have limited abilities of regeneration. So it is with the restoration of images: if we have the beginnings of a drawing of a hand and additional parts of the image, we will, to a certain extent, be able to complete it. If we have removed only a small spot of colour, it seems that there are not too many ways of restoring it. Something similar is shared by the image and the mathematical model. So it is with music: if we omit a note there are many ways to replace it; in any case, so it is in certain musical styles.
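A minimal computational sketch of the 1 + 7 example may make this concrete (this is purely illustrative; the paper itself works model-theoretically): store addition on a finite fragment of the natural numbers as a partial table, erase the entry for 1 + 7, and let the commutative law force the missing value back.

```python
# Purely illustrative: erase 1 + 7 from a finite addition table, then let the
# commutative law force the missing value back.

N = 10
add = {(x, y): x + y for x in range(N) for y in range(N)}
del add[(1, 7)]                       # "erase" the equation: 1 + 7 is now undefined

def commutativity_holds(table):
    """x + y = y + x wherever both sides are defined."""
    return all(table.get((y, x), v) == v for (x, y), v in table.items())

# Candidate values for 1 + 7 that keep commutativity true after expansion:
candidates = [v for v in range(2 * N) if commutativity_holds({**add, (1, 7): v})]
print(candidates)  # [8] -- the law leaves exactly one way to repair the model
```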

  2. A Formal Definition

These analogies hint at the connection between the mathematical case of the restoration of functions and the restoration of an image. In view of this, let us try to abstract and define the regeneration of a function, and then to complete the list of requisite definitions.

The removal of an object from the domain of a function's definition requires no special effort: we need only accept the existence of models with partial functions. The more important question concerns the idea that the model knows how to regenerate itself. The model looks into itself, as it were, and finds a certain constancy which enables it to restore itself. Here I would like to implement the concept of natural expansions, or forced expansions, that has been studied in (Buzaglo 2002). Let us take the completion of a function where the model imposes a definition. I am thinking here of the expansion of the power function at 0:

(1) 2^0 = 1

This is forced by the rule B:

(2) 2^x : 2^y = 2^(x−y)

In this case we will write that the rule B imposes the expansion.

However - and this is simple though meaningful - this is exactly what we are doing in the case of restoring 1 + 7 = 8,

where, in the role of B, we use the commutative law C:

(3) x + y = y + x

 

With this in mind we introduce the following notation: "F(L, h(a)=b)" means that the law L forces the expansion h(a)=b. Thus, in the examples above:

"F(B, 20=1)" is true in the appropriate model and "F(C, 1+7=8)" is true in another model.

Given a model M for a first-order language L that includes functions and identity, we can expand L with expressions of the form "F(K, h(t)=s)", where "h( )" is a function symbol and t, s are terms, and define:

M ⊨ F(K, h(a)=b)

A rigorous definition is given in (Buzaglo 2002, p. ), but informally we explain it thus: there is an expansion of the domain of h( ) that satisfies K, and any manner of expanding the model so that it retains K must necessarily be consistent with h(a)=b.
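One possible formal rendering of this informal clause (my reconstruction for this abstract, not Buzaglo's exact formulation) is:

```latex
M \models F(K,\, h(a)=b) \iff
\begin{cases}
\text{(i)}\ \exists\, M^{*} \text{ expanding } M \text{ with } a \in \operatorname{dom}(h^{M^{*}})
  \text{ such that } M^{*} \models K, \\
\text{(ii)}\ \forall\, M^{*} \text{ as in (i)}:\ M^{*} \models h(a)=b.
\end{cases}
```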

Now we may define the main concept:

Definition: Partially Regenerative at h(a)=b

Let M be a model for a language L which includes the expression "h( )", and wherein h(a)=b. We will say that the model is partially regenerative at the point a and the function h( ) if, following the removal of a from the domain of definition of h, there is a law C that imposes h(a)=b anew.

More specifically:

M ⊨ h(a)=b

M' is the model after erasing a from the domain of h( ).

and there is a sentence C such that

M' ⊨ F(C, h(a)=b).

One may limit, in advance, those statements that impose expansion, and say that C is a universal law. From here onward we will simplify and assume this unless otherwise stated.

On this basis, we can add various definitions. Some arise very naturally: the model may be partially regenerative on a subset A of the model, and not only at a point h(a)=b. Another option is to strengthen the way the model restores itself.

Definition: Universally Regenerative at h(a)=b

M is partially regenerative at h(a)=b, and all the possible ways to restore h(a) by a universal law are compatible with h(a)=b.
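Spelled out in the same notation (again my reconstruction of the wording above, with M' the model after erasing a from the domain of h):

```latex
M \text{ is universally regenerative at } h(a)=b \iff
\begin{cases}
\exists\, C \text{ (a universal law)}:\ M' \models F(C,\, h(a)=b), \\
\forall\, C' \text{ universal},\ \forall c:\ M' \models F(C',\, h(a)=c) \Rightarrow c = b.
\end{cases}
```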

In the paper I wish to explore the potential of the definition above. There are two channels we can follow: we can either study the logic of regeneration in the usual way we study constructions and concepts in model theory, or we may look at cases of regeneration and extract from them possible guidelines for this exploration. The interesting part is, of course, where these strategies converge. In what follows I will take a few steps in each direction.

References 

Buzaglo, M., 2002, The Logic of Concept Expansion, Cambridge University Press.

Dennett, D. (1992). "Filling in" versus finding out: a ubiquitous confusion in cognitive science. In Pick, H. L. Jr., van den Broek, P., & Knill, D. C. (Eds.), Cognition: Conceptual and Methodological Issues (pp. 33–49). Washington, DC: American Psychological Association.

Ramachandran, V.S.; Gregory, R.L. (1991). "Perceptual filling in of artificially induced scotomas in human vision". Nature. 350 (6320): 699–702.

Sustaining Digital Adaptation

Introduction

In response to “Digital Darwinism”, people, organisations and society need to adapt to the different characteristics of digital information. Successful adaptation requires an understanding of the nature of information and the information ecosystems (just “ecosystems” when the context is clear) that develop under a range of selection processes. Ecosystems develop conventions that determine the pace, quality and friction with which information is processed. These conventions are embedded in the structure and communications of ecosystems and the entities that comprise them; we call them Interacting Entities (IEs)—these include people, computer systems, animals, organisations or parts of organisations.

Ecosystem inertia means that these conventions often cannot respond fast enough to digital change. Digital information offers the potential of faster pace and reduced friction but, to achieve these and also build in the necessary information quality, ecosystems may require extensive change to respond to different and diverse interactions.

To make these changes sustainable, they need to be compatible with the changing selection pressures resulting from digital change. IEs need to be dextrous to respond fast enough to the environment (the term "dextrous" avoids confusion with the specific meaning of the term "agile" in the example used below). This paper considers the impact of digital change using an analysis of the structure and flexibility of IEs in information ecosystems and shows how to sustain digital adaptation. The ideas are applied throughout the paper to an IT department (as an IE) to highlight recent ideas about software and systems engineering (Agile and DevOps—so-called because it forges a much closer relationship between Development and Operations) in a digital environment. 

Selection and Information Ecosystems

IEs interact with their environment through channels—they sense the environment and decide how to act. The interactions of an IE and their impact on outcomes determine the ability of the IE to thrive (or not)—they define the selection pressures on the IE. Favourable outcomes will enable IEs to create derived IEs (eg children, updated software product versions, changed organisations) through some development process. In this way, interactions determine how IEs evolve in ecosystems. The activity of other IEs (eg competitors) impacts the interactions available to an IE (and vice versa) especially when resources are scarce.

Information enables IEs to link the current environment state with potential outcomes and the actions potentially able to provide these outcomes. So, IEs deal with descriptive, predictive and prescriptive information and the connections between them. 

Information ecosystems develop their own conventions for sensing, processing and acting using these different types of information. These conventions support different ranges of information measures (pace, friction, quality) that result from trade-offs driven by the selection pressures. 

Complex IEs include different components (IEs in themselves) that exhibit different conventions. For example, businesses have Sales and Marketing, Finance, HR and IT departments. Each contains a mix of the ecosystem conventions of the business and those of the particular discipline. Each may contain smaller components; for example, an IT department may contain separate Development and Operations components. Selection pressures may apply differently to the different components of an IE. 

We can categorise the selection pressures on IT organisations in the following terms:

  • business selection pressures (“quality” in terms of the alignment of IT with business strategy);
  • user selection pressures (“quality” in terms of system and service quality and reliability);
  • efficiency selection pressure (“friction”);
  • responsiveness selection pressure (“pace”).

If an IT organisation does not deliver favourable outcomes for its organisation in response to these pressures then there may be a range of implications. The business may not be able to deliver its strategy or operations efficiently and effectively. For the IT department, people may lose their jobs, the department may be reorganised, budgets may be changed or “shadow IT” (IT managed from outside the IT department) may spread.

IE patterns and components

The success (or not) of an IE is influenced by its pattern and the ecosystem conventions it instantiates. This includes the components it has, how they are structured and their ability to support different types of interaction. 

There are two important dexterity measures: flexibility and extensibility. Flexibility is measured in terms of the number of different interaction types that a pattern can support without change. Extensibility measures what level of resource (eg money) is needed to support other interaction types—how easy it is to change the IE.

Patterns for organisations are described in terms of an operating model that defines facets like: interaction channels, governance bodies (decision-making groups of people), performance management, processes, organisation design (structure and roles) and culture. All of these also set the internal selection pressures on staff and suppliers and they define how effectively the external selection pressures are translated into internal selection pressures—a mismatch between the two is a source of poor performance.

The interaction channels determine which other IEs an organisation can interact with, and how. They determine how effectively an organisation can sense the environment and act. The flexibility of an interaction channel is determined by its reach and by whether the IEs at the other end use it. The requirement to create new channels (eg a new web site) causes inertia. Flexibility is supported by the ability to change interactions easily as part of existing processes (eg to post new content rather than re-create a web site).

Governance bodies have a fundamental role in delivering dexterity because they decide much of what will be done (eg financial approval to proceed with changes). The degree to which they do so depends on their terms of reference, the information available to them (and the degree to which it captures friction, quality and pace), the attendees and the frequency with which they meet.

The degree to which performance management supports flexibility is determined by the objectives defined, how they are assessed and the degree to which they reflect the selection pressures. Performance management can drive unanticipated behaviours when people “game” the measures or when the measures do not reflect the external selection pressures. Matching external selection pressures accurately is difficult to achieve, especially as they change. Performance management can reinforce discipline ecosystem conventions at the expense of external selection pressures. And, compared with the minute-to-minute impact of culture, performance management can be a blunt, infrequent tool to create internal selection pressures.

An organisation design aligned with functions (eg Development, Operations) can prioritise discipline ecosystem conventions over the market. By contrast, market orientation can (without care) ignore elements of good functional practice. However, with a functional design it is more difficult to align internal selection pressures with external selection pressures.

Processes support flexibility only if they are designed to do so and the implementation accurately reflects the design. Anyone who has called a poor quality call centre will understand the impact of inflexible processes and the constraints imposed by underlying systems. Process flexibility often demands flexible access to information rather than the limitations imposed by many systems.

Culture is the often unrecognised driver of success because it operates continuously (unlike many of the other facets) and so it provides ever-present selection pressures. It has been defined as “the specific collection of values and norms that are shared by people and groups in an organization and that control the way they interact with each other”. In other words, it is at the core of ecosystem conventions in organisations. However, it can be difficult to change.

We can summarise these points in the following way: the facets create inertia when they are driven by friction and quality at the expense of pace. Often the facets are defined infrequently and changed only in response to major problems (often in a way that decreases pace). But culture is an insidious presence and it can determine the extent to which the others will enable dexterity.

For IT organisations, the prevailing operating models and conventions have been defined by the implementation of IT Infrastructure Library (ITIL) processes and phases of outsourcing, amongst other factors. As conventions have developed over decades, many issues have developed that indicate a mismatch in external and internal selection pressures, for example:

  • unnecessary working practices developed in response to particular events and never discarded in the light of changed circumstances;
  • “watermelon” performance—green (good performance) on the outside, according to service reports perhaps, but red (poor performance) on the inside as perceived by business users;
  • missing interactions—in which the IT organisation does not interact sufficiently on (for example) strategic questions;
  • mismatched measures—in which pace and quality were traded-off against perceived reductions in friction (ie cost-driven) or quality was traded off against perceived pace (ie delivery on time whatever the consequences);
  • retreat into silos—in which selection pressures have focused separately on individual components rather than the whole, respecting the discipline ecosystem rather than that of the organisation as a whole and causing difficulties and gaps between components.

Digital information

Digital Darwinism is caused by fundamental changes to external selection processes. Digital information enables the same outcomes to be achieved in new ways with reduced friction and also enables new outcomes to be achieved; it stimulates a faster rate of change in the environment and a consequent requirement for IEs to increase their responsiveness. Increasingly, digital technology (including examples like machine learning) enables new approaches to information quality. 

Digital technology has created numerous new channels and sources of information (eg the world-wide web, social media) through new devices. Digital information supports different types of information—initially the focus was on descriptive information (eg customer address) but increasingly also predictive information (eg propensity to buy) and prescriptive information (eg automated action).

Under the influence of Moore’s Law, digital technology has driven down friction by several orders of magnitude. As friction has been driven down, so increases in pace have been enabled—it has been possible to interact much more frequently and more responsively.

Quality was one of the initial casualties of reduced friction. Quality is too hard to assess routinely unless there is a very good reason, so “quality by proxy” is the norm. Such norms have not yet been established for many new channels, and quality has been assumed rather than proven. However, machine learning, amongst other approaches, can improve quality in particular domains by enabling types of prediction not available to people (relying, for example, on the analysis of very large amounts of data).

Sensing the environment has also changed dramatically. A proliferation of channels and information volume (“big data”) without understood levels of quality provides a new challenge in terms of measuring meaningful events in the environment (ie determining signal to noise). However, access to so much data, with new tools (eg sentiment analysis), can provide much faster understanding.

In response to digital change, many organisations suffer from inertia and can no longer respond quickly enough. Recently, improving pace—the ability to respond fast and accurately to the environment—has been the focus for many organisations as part of their digital transformation.

Change and dexterity

The challenge for organisations is to become more dextrous—with a pattern that makes them responsive to change. Flexibility and extensibility are key measures of dexterity. But the nature of the change needs to be clearly understood—it is important to sense the environment and determine the external selection pressures with sufficient certainty to be able to define the required internal selection pressures and operating model required. 

IT has traditionally applied internal selection processes based on traditional engineering practices. But these were not designed to deal with the pace of digital change. Digital Darwinism requires:

  • a short cycle time—the ability to turn new requirements into IT services quickly;
  • the ability to deliver many releases in a shorter time than previously.

Established ecosystem conventions are not compatible with these requirements and new ways of working—Agile and DevOps—have developed specifically to meet the challenge of overcoming the conventions. They use automation but also apply lean ways of working originally developed by Toyota and now used extensively in services as well as manufacturing. Lean ways of working embed principles that link internal selection pressures and external selection pressures.

Changing the pattern sustainably

Changing organisations is not easy. The impact of ecosystem conventions is that old ways of working are deeply embedded in all aspects of the operating model, stifling change. In making a change there are two factors to consider: what pattern is required and how can internal selection pressures be encouraged to stay aligned with external selection pressures as they change? 

The new pattern may require new components and, to overcome the ecosystem differences of components, new relationships between them. Each component may also require a new mix of the ecosystem conventions of the business and that of the particular discipline; or entirely new ecosystem conventions may need to develop. In particular, it is important to identify the elements of the operating model that will inhibit the change.

How would these ideas apply to an IT department? Each of the facets of the operating model may need to change overall and with respect to the different components. The Development and Operations components, often with different cultures and practices, present one of the largest challenges.

The implementation of lean techniques matches the internal selection pressures used within the IT department to the external selection pressures and applies them at a much more detailed scale. As the name implies, DevOps is about Development and Operations departments working closely together despite their different cultures; DevOps has been defined as the application of lean ways of working to IT, supported by automation. Lean ways of working are an important factor in making the cultural changes needed. DevOps builds on the following lean principles:

  • the “voice of the customer”—in response to the business and user (“quality”) selection pressures;
  • continuous removal of waste—in response to the efficiency (“friction”) selection pressure;
  • flow and value stream management—in response to the responsiveness (“pace”) selection pressure;
  • culture—combining responses to the selection pressures in day-to-day interactions with people rather than intermittently.

Combining these with automation (to support development, integration, testing and deployment) can provide large changes in both pace and quality and maintain the alignment of internal and external selection pressures.

Habits and Affects : Learning by an Associative Two-Process
Published: 09 June 2017 by MDPI in DIGITALISATION FOR A SUSTAINABLE SOCIETY session HABITS AND RITUALS

In animal learning theory, the notion of habits is frequently employed to describe instrumental behaviour that is (among other things) inflexible (i.e. slow to change), unconscious, and insensitive to reinforcer devaluation (Dickinson 1985, Seger & Spiering 2011). It has also been suggested that learning using reinforcement learning algorithms somewhat reflects a transition from affect-based to more habit-based behaviour (Seger & Spiering 2011), where dual memory systems exist for affective working memory and standard (e.g. spatial) working memory (Davidson & Irwin 1999, Watanabe et al. 2007).

Associative Two-Process theory has been proposed to explain phenomena emergent from differential outcomes training. In this procedure, animals (sometimes humans) are presented with stimuli/objects that uniquely identify differential outcomes, e.g. a circle stimulus precedes the presentation of a food outcome, a square stimulus precedes the presentation of a toy outcome. Outcomes are, in turn, mediated by specific responses, e.g. press the right button to obtain the food, press the left button to obtain the toy. Manipulating these stimulus-response-outcome contingencies reveals the two types of memory: one concerns ‘standard’ working memory of stimulus-response associations, the other concerns ‘prospective’ memory, i.e. that stimulus-expectation-response follows in a sequence.

The neural dynamic relationship between the purported dual memory structures may vary depending on the stage of learning at which the animal/human (agent) has arrived. Previously it has been suggested (Lowe et al. 2014), and neural-computationally demonstrated, that a working memory route is critical in initial learning trials, where the agent is presented sequentially with a given stimulus, action/behavioural options, and finally an outcome (e.g. a rewarding stimulus or its absence). Subsequent trials lead to a dominance of affective (or otherwise prospective) memory, which effectively scaffolds the learning of the outcome-achieving stimulus-response rules under conditions of relative uncertainty. Finally, during later stages of learning, more ‘habitual’ responding may occur, where the retrospective route becomes dominant and ‘overshadows’ the prospective memory.
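To fix ideas, here is a deliberately minimal toy sketch of two routes learning the circle/square task described above: a retrospective stimulus-response table and a prospective route running stimulus → outcome expectancy → response, with the blend between routes shifting over trials. This is my illustration only, not the neural dynamic model of Lowe et al. (2014); all parameter values are arbitrary.

```python
import random

STIMULI = ["circle", "square"]
RESPONSES = ["left", "right"]
CORRECT = {"circle": "right", "square": "left"}  # response that procures the outcome
OUTCOMES = {"circle": "food", "square": "toy"}   # differential outcomes
ALPHA, EPSILON, TRIALS = 0.2, 0.1, 400

# Retrospective route: direct stimulus-response strengths.
q = {s: {r: 0.0 for r in RESPONSES} for s in STIMULI}
# Prospective route: stimulus -> outcome expectancy -> response strength.
expectancy = {s: {o: 0.0 for o in OUTCOMES.values()} for s in STIMULI}
out_resp = {o: {r: 0.0 for r in RESPONSES} for o in OUTCOMES.values()}

def act(s, w, explore=True):
    """Blend the two routes; w is the weight of the prospective route."""
    if explore and random.random() < EPSILON:
        return random.choice(RESPONSES)
    def value(r):
        prospective = sum(expectancy[s][o] * out_resp[o][r] for o in out_resp)
        return (1 - w) * q[s][r] + w * prospective
    return max(RESPONSES, key=value)

random.seed(1)
for t in range(TRIALS):
    s = random.choice(STIMULI)
    w = min(1.0, t / (TRIALS / 2))       # prospective route comes to dominate
    r = act(s, w)
    reward = 1.0 if r == CORRECT[s] else 0.0
    q[s][r] += ALPHA * (reward - q[s][r])            # retrospective update
    if reward:                                       # outcome occurs on correct trials only
        o = OUTCOMES[s]
        expectancy[s][o] += ALPHA * (1.0 - expectancy[s][o])
        out_resp[o][r] += ALPHA * (1.0 - out_resp[o][r])

print({s: act(s, 1.0, explore=False) for s in STIMULI})  # choices via prospective route
```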

In neural anatomical terms, candidate structures for implementing prospective memory include the orbitofrontal cortex (OFC), which is considered to enable fast, flexible and context-based learning (particularly important in studies of reversal learning, e.g. Delamater 2007). This is in contrast to the amygdala, which is considered less flexible, i.e. resistant to unlearning, but nevertheless critical to learning valuations of stimuli (Schoenbaum et al., 2007). Furthermore, the interplay between the basolateral division of the amygdala (BLA) and the OFC may be crucial in differential reward evaluation (Ramirez and Savage, 2007). Passingham and Wise (2012) have suggested that the medial prefrontal cortex (PFC) has a critical role in encoding outcome-contingent choice, whereas Watanabe et al. (2007) have provided evidence for the lateral PFC integrating activation inputs from ‘retrospective’ (working memory) areas such as dorsal PFC and ‘prospective’ (outcome expectant) areas such as OFC and medial PFC.

A perspective of Urcuioli (2005, 2013) is that outcome expectancies (from prospective memory) provide a means to effectively classify stimuli. Action selection can then be simplified through exploiting affordances of the subset of those actions already associated with the outcome expectancy classes. This is a reason why participants under certain forms of differential outcomes training can immediately select the unique action that leads to the desired outcome even though the stimulus-action (response) contingency has previously not been experienced: Subjects have already classified the stimuli according to a given outcome expectancy previously associated with an action.

In this work, I discuss the associative two-process model in relation to (standard) working memory and ‘affective working memory’ (Watanabe et al. 2007) as providing a means to classify stimuli. I refer to a number of animal learning paradigms that demonstrate the potential for reward and reward-omission anticipation to be associated with reward-promoting behaviour (cf. Overmier & Lawry 1979, Kruse & Overmier 1982, Urcuioli 2013, Lowe et al. 2016, Lowe & Billing 2017), and to neural computational aspects of the interplay of affective (prospective) and working (retrospective) memory that may yield more habitual behaviour. I show that, within an associative two-process context, habits can also be understood in terms of affective working memory, specifically in relation to reward acquisition expectation and reward omission expectation. Habits, in this context, are considered behaviours that are inflexibly selected in spite of reinforcer devaluation, and their rigidity reflects the certainty/uncertainty of a particular rewarding outcome.

I discuss the implications of such learning of habits and affective mediations of behaviour, particularly regarding memory and clinical conditions (e.g. Alzheimer’s) and learning in children. This may inform new digitized solutions for intervention approaches with senior citizens and for pedagogy in relation to child development.

 

References

 

Dickinson, A. (1985). Actions and habits: the development of behavioural autonomy. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 308(1135), 67-78.

 

Davidson, R.J. and Irwin, W. (1999). The functional neuroanatomy of emotion and affective style. Trends in Cognitive Sciences, 3: 11-21.


 

Delamater, A.R. (2007). The role of the orbitofrontal cortex in sensory-specific encoding of associations in pavlovian and instrumental conditioning. Annals of the New York Academy of Sciences, 1121(1):152–173

 

Kruse, J. M., and Overmier, J. B. (1982). Anticipation of reward  omission as a cue for choice behavior. Learning and Motivation, 13, 505–525.


 

Lowe, R., Sandamirskaya, Y. and Billing, E. (2014). The actor - differential outcomes critic: A neural dynamic model of prospective overshadowing of retrospective action control. The Fourth Joint IEEE Conference on Development and Learning and on Epigenetic Robotics, pp. 440–447.

 

Lowe, R., Almer, A., Lindblad, G., Gander, P., Michael, J., Vesper, C. (2016) Minimalist social-affective value for use in joint action: A neural-computational hypothesis. Frontiers in Computational Neuroscience, 10(88).

 

Lowe, R. and Billing, E. (2017) Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training, Adaptive Behavior, 25 (1), 5-23

 

Overmier, J. B., & Lawry, J.A. (1979). Pavlovian conditioning and the mediation of behavior. The Psychology of Learning and Motivation, 13, 1–55.

 

Passingham, R. and Wise, S. (2012). The neurobiology of the prefrontal cortex: anatomy, evolution, and the origin of insight, vol 50. Oxford University Press.

 

Ramirez, D. and Savage, L. (2007). Differential involvement of the basolateral amygdala, orbitofrontal cortex, and nucleus accumbens core in the acquisition and use of reward expectancies. Behavioral neuroscience, 121(5):896–906.

 

Schoenbaum, G., Saddoris, M. and Stalnaker, T. (2007) Reconciling the roles of orbitofrontal cortex in reversal learning and the encoding of outcome expectancies. Annals of the New York Academy of Science, 1121:320–335.

 

Seger, C. A. and Spiering, B. J. (2011). A critical review of habit learning and the basal ganglia. Frontiers in systems neuroscience, 5.

 

Urcuioli, P.J. (2005). Behavioral and associative effects of differential outcomes in discriminating learning. Learning and Behavior, 33(1):1–21.

 

Urcuioli, P. (2013). Stimulus control and stimulus class formation. In Madden, G. J., Dube, W. V., Hackenberg, T. D., Hanley, G. P., & Lattal, K. A. (eds), APA Handbook of Behavior Analysis (Vol. 1, pp. 361–386). Washington, DC: American Psychological Association.

 

Watanabe, M., Hikosaka, K., Sakagami, M., & Shirakawa, S. (2007). Reward expectancy-related prefrontal neuronal activities: Are they neural substrates of ‘‘affective’’ working memory? Cortex, 43, 53–64.

Role of Happiness as a Habitual Process
Published: 09 June 2017 by MDPI in DIGITALISATION FOR A SUSTAINABLE SOCIETY session HABITS AND RITUALS

Philosophy is philosophizing about our experience of the world, life, or thought, and it truly enriches our social, political, intellectual, and emotional existence. Although philosophers hold various views on any single issue, they still share a common interest, i.e., critique with a comprehensive approach of thought; ‘philosophy’ is therefore a way to understand our life (not a way of life). Similarly, our life is based on various kinds of habits and rituals (prayer, meditation, yoga, worshipping many deities, speaking multiple languages and using symbols to communicate with each other, eating various foods within different cultural practices, etc.) arising from religious practices, and people love to follow these procedures to continue their existing diversity of cultures. Take the example of ‘happiness’. For understanding the true nature of happiness there are many philosophical debates, from both Eastern and Western perspectives, but their underlying motto is the same, i.e., the continuous practice of habits. This paper will focus mainly on Aristotle’s understanding of ‘eudaimonia’ (happiness) and the significant role of ‘habits’ in the flourishing of a happy life.

 

The Movement of Habit: On Ritual and Activism
Published: 09 June 2017 by MDPI in DIGITALISATION FOR A SUSTAINABLE SOCIETY session HABITS AND RITUALS

In one of the most distinctive readings of the concept of habit, Felix Ravaisson cut through its common association with passivity by posing the problem as one of distance: considered as conscious reflection, an inclination may be said to tend towards an end or object outside it; considered as habitual, however, an inclination may be said to be much closer to the actuality it seeks to reach. As the automatism of an inclination increases, the movement and the goal almost touch each other. The result is a type of immediate creation. Something comes to being, a fusion of the real and the ideal, of the personal and the impersonal without the necessary intervention of a consciousness to will it. As it proliferates, the habit creates a world by allowing a manifold of influences to coalesce as one consistent behaviour. We might, after Peirce, call this consistency a ‘sign’ or ‘third’, born of the junction of the potential and the actual. Or, to borrow from Spinoza, we may call it expression. Far from being merely passive, then, habit may be said to infinitely approximate a supreme form of spontaneity, not unlike the ‘intelligent intuition’ that Kant had reserved for God. In other words, habit sheds its connection with psychologism to become properly metaphysical. In this paper, I examine the usefulness of such a metaphysical concept of habit for an understanding of ritual and ritualistic practice, especially in the context of recent earth activism supported by indigenous spiritualist imaginaries. I explore how ritual, tied both to the habit of communicating with nature but also with the understanding that nature produces itself in its habits, opens up the possibility of actively shaping social and political realities by ‘expressing’ or ‘signifying’ a merger with the free and creative force of the cosmos.

Transhumanism: A Progressive Vision of the Future or Liberal Capitalism's Last Ideological Resort?

As an organised socio-cultural and increasingly politically active movement, transhumanism is a rather new phenomenon. It has its roots in those segments of US society in the 1970s and 1980s which – against the backdrop of wide-ranging expectations concerning the ‘Space Age’ – merged ideas and habits of the counter-culture of the 1960s with strong, often quasi-religious hopes for a future society shaped by science-fictionesque high tech (Schummer 2009; McCray 2013). While this proto- or early transhumanist movement already evolved within some organisational networks and structures (e.g. the L5 Society, which promoted the colonisation of extra-terrestrial space), organisations specifically dedicated to the promotion of transhumanism as an encompassing worldview emerged only in the 1990s. Since then, we have witnessed some organisational re-shuffles within the movement and, recently, the emergence of (small) political organisations of transhumanists, including some (very small) national political parties.

In order to adequately assess the current relevance of transhumanism, it would, however, be short-sighted to look only at the organised movement in a narrow sense. Much of its current relevance is due to the fact that it is embedded in a much broader socio-cultural milieu which includes, for example, major figures of the IT industry. The propinquity to transhumanism displayed by influential networks in the IT industry and other powerful elements of digital capitalism (e.g. in US science policy) has been pointed out in policy-oriented and ethical discourses on various fields of science and technology (such as nanotechnology) for quite some time now. Since the late 2000s, however, we have witnessed a surge of broader public interest in the question of the extent to which transhumanism plays a role in the visions of the future, or even in the short-term business strategies, of key players in digital capitalism. As such, transhumanism is often deemed a radical variant of what has been termed the ‘Californian ideology’ (Barbrook and Cameron 1996).

The surge in public interest has entailed a considerable amount of mass media reporting (e.g. McCracken and Grossman 2013), which in turn created some interest among policy makers in this topic; and we have also seen an increase in anarchist, socialist or ecologist critiques, for example in France (PMO 2015) and in Germany (Jansen 2015; Wagner 2015). On the other hand, some fashionable currents of the Left, such as accelerationism, have brought forward notions of progress and of emancipation through technology closely resembling transhumanist notions.

This brings us to another – and, as will be argued, crucial – aspect of transhumanism, namely the fact that the transhumanist movement of our times is in many respects deeply indebted, if indeed not merely epigonic, to thinkers of the last third of the nineteenth and the first third of the twentieth century who developed, even in some technical detail, genuinely transhumanist visions of the future. For our discussion, this aspect is crucial because these thinkers tended towards, or openly promoted, socialist visions of the future in which the creation of a socialist world society is portrayed as the basis of a much larger endeavour of the human (and increasingly cyborgised, transhuman) species, namely the conquest of extra-terrestrial space by a civilisation in which the human intellect is embodied in technoscientific devices.

In the present paper, it is argued that, in order to answer the question raised in the title of this workshop – namely, whether transhumanism should be seen as a “proper guide to a posthuman condition” or deemed a “dangerous idea” – we first have to ask in which visions of a future society transhumanism is embedded. While much of the discourse on transhumanism since the late 1990s has focused on a perceived dichotomy of an (ultra-)liberal and individualist, largely US-American transhumanism versus a variety of anti-individualist (conservative, ecologist or socialist) critiques of transhumanism, a historical perspective may allow us to better understand the multi-faceted ideological character of transhumanism. As has been argued (Coenen 2014), the increasing relevance of transhumanism in current discourse on science, technology and the future demonstrates that global players in today’s digital capitalism still follow an agenda which was developed in Britain in the heyday of imperialism and after the Great War, as a reaction to a perceived crisis of progressive thinking and as a contribution to the establishment of technoscience in society. Notwithstanding its focus on individual choices, the ideological foundations of current transhumanism are thus collectivistic. Due to its largely quasi-religious character, transhumanism could and can be an element of politically quite different projects, such as British imperialism, scientistic communism and ‘digital capitalism’; and current transhumanism, as an ideology for technoscience, still expresses the belief in a grand narrative about the future of humankind in which technoscience is portrayed as a means of salvation.

In light of the strange fact that the transhumanist grand narrative about the future has fascinated, and continues to fascinate, representatives of a wide variety of political persuasions, it appears advisable to analyse the question of the desirability of transhumanism against the backdrop of the different societal visions and political stances evident in the history of transhumanism. With a view to the above-mentioned current political discussions about the role of transhumanism in our ‘digital age’, we may then first ask whether transhumanism provides us with a progressive vision of the (far) future of our species, or whether it should rather be deemed liberal capitalism's last ideological resort, competing with nationalist and (openly) religious ideologies. On the basis of such an analysis, more specific questions concerning the desirability of transhumanism can be raised, for example with regard to the potential consequences of its rise for the goal of creating a sustainable global society.

References:

Barbrook, R., Cameron, A. (1996): The Californian Ideology. Science as Culture 6(1), 44-72

Coenen, C. (2014): Transhumanism and its Genesis: The Shaping of Human Enhancement Discourse by Visions of the Future. Humana.Mente. Journal of Philosophical Studies 26, 35-58

Jansen, M. (2015): Digitale Herrschaft. Über das Zeitalter der globalen Kontrolle und wie Transhumanismus und Synthetische Biologie das Leben neu definieren. Schmetterling, Stuttgart

McCracken, H., Grossman, L. (2013): Google vs. Death. Time, 30 September 2013 (title story)

McCray, P. (2013): The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future. Princeton University Press, Princeton

PMO (Pièces et main d’œuvre) (2015): Transhumanisme: du progrès de l’inhumanité; http://www.piecesetmaindoeuvre.com/IMG/pdf/Transhumanisme_inhumanite_-2.pdf

Schummer, J. (2009): Nanotechnologie. Spiele mit Grenzen. Suhrkamp (edition unseld), Frankfurt am Main

Wagner, T. (2015): Robokratie: Google, das Silicon Valley und der Mensch als Auslaufmodell. PapyRossa, Köln

Aspects of mind uploading

I will discuss various aspects of mind uploading, including practical (will it be possible?), ethical (is it morally permissible to develop uploading technology?), philosophical (if I upload, will the upload be conscious, and will it be me as opposed to merely a copy of me?) and sociological (what will a society dominated by uploads be like?).

“Alternative Facts” and “Fake News”: cultural studies’ illegitimate brainchildren

Looking at the state of the Humanities today, a number of the demands made by Cultural Studies theorists, from Birmingham to Chapel Hill, have been met. In the western world, people, even outside academia, tend to accept that truth is not absolute, that culture is a construct, and many have become aware that there is a continuous struggle for hegemony in discourse. Add to this that Stuart Hall’s vision of a world in which the media is finally free for all who want to make themselves heard has come true. The internet has made it much harder to exclude marginalized communities. This view is not altogether wrong, as the internet was central to mobilizing protest in, for example, the Arab Spring revolt.

Yet discourse has not become more rational. What we also see is a triumphant return of right-wing movements which – to reference Rainer Zimmermann – engage in “savage thought” on and through the net. And while I concur with him that their dominant discourse is irrational and abandons facticity, I argue that all this is directly related to the situation of the Humanities. The arguments of the latest wave of right-wing intellectuals show an embarrassing kinship with those of the left-wing Cultural Studies project. White-supremacy ideologue Richard Spencer (a former student of the Marcuse disciple Paul Gottfried), for example, reconciles, with ease, liberal ideas of identity politics with racism. But what may be more instructive is a discussion of two phenomena that the right loves to exploit in its struggle for hegemony: “alternative facts” and “fake news”. Although lies and canards are as old as journalism itself, it will become apparent that they have evolved into new breeds in the digital age. They are also cultural studies’ illegitimate brainchildren. The dynamics of the internet, in combination with the belief in constructivism, is proving to be toxic.

My presentation will look at the “alternative facts” and “fake news” of the alt-right in the USA and show how their creators make use of postulates and practices more commonly associated with cultural studies (and postmodernist thought).
