This talk aims to provide a possible explanation of why most people seem to care very little about the unethical nature of many of today’s technologies. It outlines what science and philosophy tell us about the biological and cultural evolutionary origins of (human) morality and ethics, introduces recent research in moral cognition and the importance of moral intuitions in human decision making, and discusses how these findings relate to contemporary issues such as A(G)I, self-driving cars, sex robots, “surveillance capitalism”, the Snowden revelations and many more. Suggesting a “moral intuition void” effect that leads ordinary users to remain largely oblivious to the moral dimensions of many technologies, it identifies technologists as “learned moral experts” and emphasizes their responsibility to assume an active role in safeguarding the ethicality of today’s and future technologies.
Why is it that in a technological present full of unethical practices – from the “attention economy” to “surveillance capitalism”, “planned obsolescence”, DRM, and so on and so forth – so many appear to care so little?
To attempt to answer this question, the presentation begins its argument with an introduction to our contemporary understanding of the origins of (human) morality and ethics: from computational approaches à la Axelrod’s Tit for Tat, Frans de Waal’s cucumber-throwing monkeys, and Steven Pinker’s “The Better Angels of Our Nature”, to contemporary moral psychology and moral cognition and these fields’ work on moral intuitions.
As research in these fields over the last couple of decades suggests, much, if not most, of (human) moral / ethical decision making appears to be based on moral intuitions rather than on careful, rational reasoning. Joshua Greene likens this to the difference between the “point-and-shoot” mode and the manual mode of a digital camera. Jonathan Haidt uses the metaphor of an elephant (moral intuition) and its rider (conscious deliberation) to emphasize the difference in weight. These intuitions are the result of both biological and cultural evolution – the former carrying most of the weight.
The problem with this basis for our moral decision making, as this presentation will argue, is that we have not (yet) had the time to evolve – either culturally or biologically – “appropriate” moral intuitions towards the technologies that surround us every day, resulting in a “moral intuition void” effect. And without initial moral intuitions in the face of a technological artifact, neither sentiment nor reason may be activated to pass judgment on its ethicality.
This perspective allows for some interesting conclusions. Firstly, technologists (i.e. hackers, engineers, programmers etc.) who exhibit strong moral intuitions toward certain artifacts have to be understood as “learned moral experts”, whose ability to intuitively grasp the ethical dimensions of a given technology is not shared by the majority of users.
Secondly, users cannot be expected to possess an innate sense of “right and wrong” with regard to technologies. Thirdly, entities (such as for-profit corporations) that make deliberate use of the “moral intuition void” effect need to be called out for doing so.
All in all, this presentation aims to provide a tool for thinking that may be put to use in various cases and discussions. It formulates the ethical imperative for technologists to act upon their expertise-enabled moral intuitions, and calls for an active “memetic engineering process” to “intelligently design” appropriate, culturally learned societal intuitions and responses for our technological present and future.