As machines become more advanced, the implications for business and society will be enormous. How human-machine interaction will unfold is hard to predict, but one thing is clear: we will have to teach machines ethics. Teaching human ethics to a non-human entity is the challenge I will address.
AI is able to automate not only repetitive routines, like manual work in factories, but is also assuming control of ever more complex tasks and decisions that we thought only humans could execute, ranging from driving cars to conversational bots, email automation and agenda scheduling.
As machines enter these areas, several ethical questions will inevitably arise, such as the concepts of intentionality and causality. For instance, faced with a challenging situation, how will a driverless car algorithm prioritize between two possible outcomes: harming the car owner or possibly killing a pedestrian?
Ethics is an immense labyrinth of mostly implicit rules that seem obvious to humans but can be challenging for a machine. Can these ethical values be hard-wired into a machine, or should they be learned through socialization, as children learn them through education? How will that be possible if machines lack the most basic human features, such as a sense of self, emotions or a body? In this talk I'll present some of these challenges and outline possible solutions.
Nick Bostrom addresses the issue from an interesting perspective.