Theoretical background

The theoretical background of the project is structured around the following aspects: (1) the importance of the problem from a scientific, technological, socio-economic or cultural point of view; (2) the main difficulties encountered in dealing with the problem; (3) the limits of current approaches in the context of the state of the art in the field.

(1) Our moral intuitions and dispositions dictate our responses to new and emerging technologies. A growing body of empirical findings points to systematic errors in moral intuitions. As a consequence, the methodology of moral reasoning and the reliability of our moral intuitions as solutions to ethical problems are at stake. We focus on three fields of technological innovation that have generated intense controversy: information technologies, neuro- and bio-enhancement, and robot-embedded Artificial Intelligence. What this triad has in common is the capacity to expand human possibilities beyond our natural and cultural moral endowment and abilities. These technologies are transformative, not merely operative (Dreyfus 2008; Heim 1993, 1999; Borgmann 2009, 2013; Solcan 2008, 2014), because they reshape human practices in radical ways.

The “fourth revolution” (Floridi 2008, 2012, 2014), informational in nature and universal and global as a total social phenomenon, is still unfolding. Identity, seen either as a deep philosophical problem of the person or as a psychological and social necessity, is primarily at issue (Shoemaker 2010; Floridi 2011; Rodogno 2012; Borgmann 2013). Today, our moral intuitions are challenged by the “illegal” flow of information between users through peer-to-peer networks (Langille & Eisen 2010). In the future, moreover, the network will link objects, humans, bits and brains around the world (the Internet of Things). Meanwhile, the connected world is already under a deluge of “datafication” (Mayer-Schönberger & Cukier 2013) and of digital profiling practices (Tavani 2007; Nissenbaum 2010; Dijk 2010) that are opaque to mere intuition. Most bewildering of all, the right to be forgotten (Rees & Heywood 2014) is claimed as insistently as the digital afterlife (Moreman & Lewis 2014). Problems around information technology thus elicit varied and contrary moral intuitions.

After the digital turn, new and emerging enhancement technologies are increasingly promoted for moral purposes. Deep-brain stimulation (e.g. electrical stimulation of the amygdala) has been proposed as a means of reducing aggression; neurofeedback can increase sympathy and/or treat antisocial behaviour (DeGrazia 2012). Some have argued that moral enhancement is not only morally permissible but should be viewed as a moral imperative. Persson and Savulescu (2008, 2010, 2011a, b, c, 2012) argue that our moral dispositions are limited and that the moral norms to which they have given rise make us ill-suited for the contemporary world. By contrast, there have been strong counter-intuitions charging that moral enhancement is a threat to human freedom and to what is most dear in human life (Sandel 2009; Harris 2011). These contrasting hopes and fears have already generated intense controversy, owing to the strong conservative intuitions they elicit (Douglas 2008).

Robotics and AI are also entering the public stage. After fifty years of research and never-ending debates on what intelligence and consciousness are, robot-embedded AI is becoming a reality. From warfare (Hellström 2013; Noorman & Johnson 2014; Olsthoorn & Royakkers 2014) to artificial companionship (Pearson & Borenstein 2014), robots will be used for many purposes. Of all of these, social companion robots are the most morally troubling: they are expected to care for, entertain and help us as real humans do (Coeckelbergh 2009, 2012; Weng, Chen & Sun 2009; van Wynsberghe 2011; Sharkey & Sharkey 2014; Sharkey 2014). The relationship with a robot is the last frontier of the moral realm still to be crossed, and there are doubts that our natural moral equipment is sufficient to build such a relationship (Bostrom 2014).

(2) The main difficulty of the problematic relationship between moral intuitions and emerging transformative technologies lies in the fact that we hold various moral intuitions while at the same time facing a moral vacuum (Moor 2005, Martin 2012). Transformative technologies provoke us to trespass beyond the natural and epistemic limits of human intuitions, which are nevertheless the driving forces of moral evaluation.

The senses of fairness and property are evolutionarily prior to questions of privacy and authenticity, and we often express all of them through our moral intuitions. These intuitions are, however, problematic in the context of new and emerging technologies. People have strong intuitions regarding property and ownership: “information wants to be free” expresses a moral intuition about fair access to information, whereas patents and copyrights are an institutionally enclosed set of intuitions about what property in ideas means (Merges 2011). In a similar fashion, people have a large set of moral intuitions about privacy and its limits, yet the concept of privacy is weak and acts like a broken umbrella (Dijk 2010). What kind of intimate practices will we have with an artificial social companion (Turkle 2010)? What are the limits for digitally profiling a person (Shoemaker 2010)? Can we imagine a world of deeply enhanced individuals? A further difficulty for humans in the near future will be the relevance that freedom, self-determination and agency retain in a world of distributed morality (Floridi 2013). In this socio-technical (Ropohl 1999), distributed, human-artificial universe of action, we should investigate how moral intuitions open up a chasm between the atomic moral level and the aggregate output.

(3) Neuroscience and cognitive science have transformed the way we think about morality, highlighting the major role that intuitions play in moral decision making (Haidt 2001, 2002, 2012; Greene et al. 2001; Sinnott-Armstrong, Young & Cushman 2010; Reynolds et al. 2010; Sunstein 2005). Recent results from cognitive science, evolutionary psychology and neuroscience, notably the heuristics-and-biases research program, show that intuitive judgment can lead us astray (Gilovich, Griffin & Kahneman 2002; Myers 2002; Kahneman 2011; Chabris & Simons 2010; Greene 2007). Paradigmatic studies point out the unreliability of heuristics and suggest that this general unreliability raises serious worries (Sunstein 2005, 2008; Kahneman & Sunstein 2005). Framing effects, biases, and shifts in the external conditions under which heuristics are used lead to systematic judgment errors. Moreover, moral heuristics and intuitions may well have an evolutionary foundation and a biological basis (de Waal 1996, 2009; Katz 2000; Sober & Wilson 1999; Lieberman, Tooby & Cosmides 2006; Haidt & Joseph 2004). Given their evolutionary structural conditions, moral intuitions may generate unsatisfactory moral judgments under radically new conditions (Singer 2005). This raises the need to carefully assess our current moral responses to problems specific to our highly technological world. We need to establish when moral intuitions are reliable before throwing them into practical debate; we cannot simply assume that ethical conceptions and intuitions that functioned well in the past will serve us in this new territory.
