'Concerns over artificial intelligence are misguided': Google executive chairman Eric Schmidt dismisses fears that robots will spell the end of humanity
Stephen Hawking recently said that the development of full artificial intelligence could spell the end of the human race.
Now Google's executive chairman, Eric Schmidt, has weighed in on the discussion, arguing that there is no need to fear AI and that it could even be the making of humanity.
'These concerns are normal,' he said onstage during the Financial Times Innovate America event in New York this week. 'They're also to some degree misguided.'
The Google chief, who is involved in the development of AI in applications such as self-driving cars, also says that the fear of robots stealing human jobs is unwarranted.
'There's lots of evidence that when computers show up, wages go up,' he said, according to a report by Issie Lapowsky in Wired.
'There's lots of evidence that people who work with computers are paid more than people without.'
He argues that machines are far simpler than people believe, citing an experiment Google conducted a few years ago on a computer 'neural network'.
Schmidt's comments follow a warning by Professor Stephen Hawking that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment.
During the test, the company's scientists fed an artificial neural network 11,000 hours of YouTube videos to see what it could learn without any training.
'It discovered the concept of "cat",' Schmidt said. 'I'm not quite sure what to say about that, except that that's where we are.'
With Google at the forefront of AI development, Eric Schmidt has a lot to gain from public acceptance of the technology.
Google's DeepMind start-up, which was bought for £255 million ($400 million) earlier this year, is currently attempting to mimic the properties of the human brain's short-term working memory.
By combining the way ordinary computers work with the way the human brain works, the artificial intelligence researchers hope the machine will learn to program itself.
Described as a 'Neural Turing Machine', it learns as it stores memories, and can later retrieve them to perform logical tasks beyond those it has been trained to do.
The acquisition of DeepMind followed Google's recent purchase of seven robotics firms, including Meka, which makes humanoid robots, and Industrial Perception, which specialises in machines that can package goods.
In August, Google also revealed it had teamed up with two of Oxford University's artificial intelligence teams to help machines better understand users.
'It is a really exciting time for AI research these days, and progress is being made on many fronts including image recognition and natural language understanding,' wrote Demis Hassabis, co-founder of DeepMind and vice president of engineering at Google, in a blog post.
But despite these projects, and Schmidt's comments, Google is also aware of the dangers involved with AI and machine learning.
So much so that in January it set up an ethics board to oversee its work in these fields.
In fact, one of the original founders of Google's DeepMind warned artificial intelligence is the 'number 1 risk for this century,' and believes it could play a part in human extinction.
'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in an interview earlier this year.
The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.
Earlier this year, Elon Musk likened artificial intelligence to 'summoning the demon'.
The Tesla and Space X founder previously warned that the technology could someday be more harmful than nuclear weapons.