Artificially intelligent morality
Writing in the weekend edition of the Financial Times, Richard Waters explored the possibilities raised by the victory of Google’s AlphaZero computer programme in a chess match against the well-established Stockfish 8 programme. The match was notable because AlphaZero appeared to have made an irrational move, sacrificing a bishop in return for a pawn. The move did not seem logical, but it had the effect of opening up the board, allowing AlphaZero to go on to triumph. The programme seemed capable of original thought; it seemed a demonstration of real artificial intelligence.
However successful computer programmes may be at chess, the possibility of them demonstrating intelligence comparable with that of humans remains remote. The possible permutations in a game of chess are astronomical in number, but they are defined by the rules of the game: the programme must operate within strict parameters. The intelligence required for daily human life demands countless skills and choices; it demands the answering of unanticipated questions, something beyond the capacity of a computer programme, which can only base its responses on the information it has been given.
The greatest difficulty for artificial intelligence will be in responding to moral questions. One of the objections raised to driverless cars is how they make moral choices. An adult pushing a child in a buggy steps off the pavement without looking: should the car hit the adult and child, or should it swerve to the right into a head-on collision with a car coming the other way? Someone writing the programme that allows a vehicle to travel autonomously would have to decide what the programme should instruct the car to do; someone has to take the moral decision, because the programme is incapable of doing so by itself.
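A minimal sketch, in Python with entirely hypothetical names, makes the point concrete: whatever rule the vehicle follows in that moment is a constant written by a human programmer long before the car ever moves, not a judgement the machine arrives at itself.

```python
# Hypothetical sketch: the "moral" preference is a hard-coded constant,
# chosen by the programmer, not a judgement made by the machine.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    estimated_casualties: int  # assumption: some upstream model supplies this

# This constant IS the moral decision, fixed before the car ever moves.
PREFER_FEWER_CASUALTIES = True

def choose_manoeuvre(stay_course: Outcome, swerve: Outcome) -> Outcome:
    """Return the outcome the vehicle is instructed to accept."""
    if PREFER_FEWER_CASUALTIES:
        # On a tie, min() returns its first argument: an arbitrary ordering,
        # written by a human, settles the dilemma.
        return min(stay_course, swerve, key=lambda o: o.estimated_casualties)
    return stay_course  # default rule: never swerve into oncoming traffic

# The scenario from the text, with assumed numbers:
hit_pedestrians = Outcome("hit the adult and child on the crossing", 2)
head_on = Outcome("swerve into the oncoming car", 2)
print(choose_manoeuvre(hit_pedestrians, head_on).description)
```

When the two outcomes weigh the same, as here, the tie is broken by an arbitrary ordering in the code; the “decision” belongs entirely to whoever wrote that line.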
Real artificial intelligence will have emerged when a programme can argue with itself, and with other programmes, about what is right and what is wrong. Perhaps, given the speed of technological progress, that possibility will arrive sooner than expected, but for that point to be reached, programmers will have to teach value systems to their machines; they will have to install a code of ethics as part of the computer’s thought processes.
Who is going to decide on the moral values of an artificially intelligent computer programme? Whether it is taking the decision to run over the child in the buggy, or the decision about which lives to save in an accident and emergency ward, someone is going to have to write the moral software. The decisions regarding the moral choices of computers are too serious to be left to technologists alone.
“Real artificial intelligence will have emerged when a programme can argue with itself, and with other programmes, about what is right and what is wrong.”
And maybe one day we, too, will again reach the point when a human can argue with itself, and with other humans, about what is right and what is wrong.
It is said that you cannot give away what you do not first possess. If they want to give a moral compass to a programme, they had better hurry, before humankind has none left to give.