By SAMI Fellow, John Ormerod
Left to themselves, the ingenuity of people and machines will continue to create ever more advanced technology for manipulating the world around us. However, the extent to which this happens, the proportion of the world's people who benefit, and the degree to which technology is allowed to deepen our understanding of the fundamental workings of the universe are not given. They depend on the ability of human beings, and in due course non-human intelligences, to act in ways that enhance rather than diminish the lives and prospects of those around them. Developments in ethical thinking, political philosophy, institutional design and organisation, and trans-national cooperation and decision-making all lag behind the progress of technology, but the changes taking place now will critically influence the future.
Some of the interesting issues being addressed by organisations ranging from the United Nations to national governments to academic and community groups include:
As technology becomes capable of augmenting the basic human endowment through enhanced cognition, learning capability and longevity, who will benefit, and by what criteria? Already the richest 2% of the world's population own 50% of the world's wealth, while the poorest 50% own just 1%. How politics should be conducted in such an unequal world remains largely unaddressed.
At what point do robots become sentient, emotional and conscious beings like us? If this happens, what are the ethical ground rules? Is the Kantian categorical imperative truly universal, applying to all beings above a minimal threshold of consciousness, or should robots always be treated as machine slaves working for the greater happiness of the human race?
If human enhancement allows the development of a super-race of fitter, brighter, longer-living people, how do we treat those who have fallen behind or opted out? The problem will be cumulative as the enhanced, like the wealthy, race ahead. Alternatively, the problem could run the other way: how does an essentially egalitarian world deal with a small minority of over-developed individuals or societies?
Today's semi-autonomous battlefield weapons ask a human operator for permission to act; the next generation will be fully autonomous, with robots deciding when, how and whom to attack. These robots will need ethical principles hard-wired into their systems, but where will those principles come from, and how are the robots to be held accountable?
The ethical thinking developed over the next ten years will determine which path the human race takes: jumping into a technological future driven by the advanced nations, to which it then has to adapt pragmatically, or allowing the debate on what is fair and right to condition the development of global institutions and politics, so that humanity evolves into a future where justice and fairness remain, at least to some extent, valid concepts. We can easily imagine scenarios built around universalistic thinking and around relativistic, perhaps nation-based, concepts, but no doubt others will emerge as we probe what, if anything, it means to be human and individual, as opposed to being part of the continuum of matter, essentially one and related to all other parts.