Computer-calculated death
Fictional, of course, but the story raised an intriguing question. Artificial intelligence is rooted in scientific calculation and rational behaviour; it works on the basis of logic and reason. Where, then, might logic, rationality and cold science lead us?
A car was being driven with reckless disregard for the lives of other road users. The driver’s dangerous speeding along a narrow country road had made a collision inevitable with an oncoming car, which was being driven by a computer.
In the fraction of a second it takes for electronic decisions to be made, the computer had calculated the best possible action to safeguard the lives of those travelling in the car. A particular evasive manoeuvre would mean that they would all survive. The most difficult decision the computer faced was whether to turn in such a way that it protected its passengers but was destroyed itself, or to turn in such a way that both its passengers and itself were saved from destruction, but the reckless oncoming driver was killed.
Perhaps computers will always be programmed to prioritise the protection of human life, even when the person whose life is being protected has recklessly endangered the lives of others. But when the point is reached where computers begin to resemble sentient beings, will that programming always determine their behaviour?
Will there come a time when a computer calculates that the millions of dollars invested in its construction and operation are a reckonable factor? Would it be logical to treat the life of someone who is irresponsibly dangerous as more important than technology that could be used in the development of life-saving insights?
Perhaps computers will learn to lie. Perhaps a computer that protected itself whilst allowing the death of another driver will have developed a capacity for being disingenuous. Perhaps it will offer the defence that it acted in the way it believed to be best, and who will be able to contradict its claims?
If the prospect of artificial intelligence being able to think for itself seems fanciful to us now, consider how fanciful the smartphones we now take for granted would have seemed if they had been described to someone living fifty years ago.
Saving one’s own life is logical; would it not be illogical to expect an intelligent computer to destroy itself?
The business and political interests of some individuals will not let computers be fair, Ian. Look now at what is happening in forensic science or cybercrime. Cloud technology will always be in favour of those who have succeeded in building their hegemony, Ian. A computer will never speak of the atrocities in the Rubaya mines in the DRC, the child labour in the clothing industry in Bangladesh, or the human trafficking and slavery in India, Mauritania and elsewhere these days, Ian.
You raise a good point. Once artificial intelligence reaches the point of being able to think for itself, will it speak for justice or will it always be biased towards the rich and powerful people who created it?