Ethical droids programmed to save 'humans' end up KILLING more than half of them
If you're nervous about sitting in a self-driving car, then you may have good reason.
A recent experiment by UK researchers attempted to find out whether robots would be able to save someone in a life or death situation – and the results were surprising.
Scientists were shocked to find that, far from acting logically, an 'ethical robot' would often be unable to act at all, with fatal results.
A recent experiment attempted to find out whether robots would be able to save someone in a life or death situation. More often than not, the robot (A) was unable to save two robots (which represented humans, D) from falling into a hole. However, when one human needed saving, the robot was successful 100 per cent of the time
Engineer Alan Winfield from UWE Bristol came up with the idea of an ethical trap in which a robot had to prevent other robots from falling into a hole.
The 'ethical robot', however, managed to save one of the two 'humans' only half the time, despite having ample time to rescue at least one of them on every run.
'We introduced a third robot - acting as a second proxy human. So now our ethical robot would face a dilemma - which one should it rescue?' Professor Winfield told Rob Waugh at Yahoo News.
Surprisingly, the robot managed to save both of the other robots three times out of 33 runs, the researchers reported, despite only having time to comfortably rescue one.
'The problem is that the Asimov robot sometimes dithers,' Professor Winfield explained.
For instance, it may see one proxy-human robot and begin moving towards it, but when it notices the other robot, it changes its mind.
'It was a bit unexpected,' Professor Winfield says. 'There was clearly time to save at least one robot, but it just left them half the time.'
He described the robot as an 'ethical zombie' that had no choice to behave how it does.
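The dithering described above can be sketched as a toy simulation. This is an entirely hypothetical illustration, not Professor Winfield's actual experiment code: it assumes a 1-D world, a fixed deadline before each 'human' falls, and two symmetric starting positions. A rescuer that re-decides every tick which 'human' is worst off flips between the two targets and saves neither, while one that commits to a target saves at least one.

```python
# Hypothetical toy model -- NOT the actual robot's control code.
# Each 'human' falls into a hole after DEADLINE ticks unless the
# rescuer reaches its 1-D position first.

DEADLINE = 8   # ticks before an unrescued 'human' falls (assumed)
SPEED = 1.0    # rescuer movement per tick (assumed)

def run(choose_target):
    rescuer = 0.0
    humans = [-5.0, 5.0]          # symmetric starting positions (assumed)
    saved = []
    for _ in range(DEADLINE):
        at_risk = [h for h in humans if h not in saved]
        if not at_risk:
            break
        target = choose_target(rescuer, at_risk)
        rescuer += SPEED if target > rescuer else -SPEED
        if abs(rescuer - target) < 1e-9:
            saved.append(target)
    return len(saved)

def dither(rescuer, at_risk):
    # Re-decide every tick: head for whichever 'human' is worst off,
    # modelled here as the one currently farthest away.  With a
    # symmetric layout the target flips each tick and the rescuer
    # never arrives.
    return max(at_risk, key=lambda h: abs(h - rescuer))

def make_commit():
    committed = []
    def commit(rescuer, at_risk):
        # Pick the nearest 'human' once and stick with it until rescued.
        if not committed or committed[0] not in at_risk:
            committed[:] = [min(at_risk, key=lambda h: abs(h - rescuer))]
        return committed[0]
    return commit

print(run(dither))         # 0 -- constant re-evaluation saves nobody
print(run(make_commit()))  # 1 -- committing saves one 'human'
```

The contrast mirrors the reported behaviour: with two equally endangered 'humans', constantly re-weighing who is worse off leaves the robot oscillating in place, while any committed choice would have saved at least one.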
The results could have implications for self-driving cars, which may someday need to choose between saving their passengers or saving other motorists.
Professor Winfield once thought robots could never be ethical at all, but he now says he isn't so sure.