Elon Musk claims a robot uprising could be a serious threat to humanity


Elon Musk is one of the driving forces behind super-intelligent computers that could improve everything from space travel to electric cars.

But the Tesla founder claims the technology could someday be more harmful than nuclear weapons.

At the weekend, the billionaire tweeted a recommendation for a book that looks at a robot uprising, claiming 'We need to be super careful with AI. Potentially more dangerous than nukes.'

Musk referred to the book 'Superintelligence: Paths, Dangers, Strategies', a work by Nick Bostrom that asks major questions about how humanity will cope with super-intelligent computers.

Mr Bostrom has also argued that the world may not be real, and that we could be living in a computer simulation.

In a later tweet, Musk wrote: 'Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.'

Musk's tweets follow a similar comment made in June, in which the Tesla founder said he believes a horrific 'Terminator-like' scenario could emerge from research into artificial intelligence.

The 42-year-old is so worried that he is investing in AI companies not to make money, but to keep an eye on the technology in case it gets out of hand.

In his tweet, Elon Musk (right) referred to the book 'Superintelligence: Paths, Dangers, Strategies' (left), a work by Nick Bostrom that asks major questions about how humanity will cope with super-intelligent computers

His tweet (pictured) follows a similar comment in June, in which the Tesla founder said he believes a horrific 'Terminator-like' scenario could emerge from research into artificial intelligence

In March, Musk made an investment in San Francisco-based AI group Vicarious, along with Mark Zuckerberg and actor Ashton Kutcher.

Vicarious' ultimate aim is to build a 'computer that thinks like a person…except it doesn't have to eat or sleep', according to the company's co-founder Scott Phoenix.

In an interview with CNBC, Musk said: 'I think there is potentially a dangerous outcome there.'

'There have been movies about this, you know, like Terminator,' Musk continued. 'There are some scary outcomes. And we should try to make sure the outcomes are good, not bad.'

42-year-old Elon Musk (pictured) is so worried, he is investing in AI companies, not to make money, but to keep an eye on the technology in case it gets out of hand

In an interview with CNBC earlier this year, Musk said: 'I think there is potentially a dangerous outcome there. There have been movies about this, you know, like Terminator'. Pictured is a scene from Terminator 2

GOOGLE SETS UP AI ETHICS BOARD TO CURB THE RISE OF THE ROBOTS

Google has set up an ethics board to oversee its work in artificial intelligence.

The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.

One of its founders has warned that artificial intelligence is the 'number 1 risk for this century,' and believes it could play a part in human extinction.

'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.

Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the 'number 1 risk for this century.'

The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.

Neuroscientist Demis Hassabis, 37, founded DeepMind two years ago with the aim of trying to help computers think like humans.

Vicarious is currently attempting to build a program that mimics the brain's neocortex. 

The neocortex is the top layer of the cerebral hemispheres in the brain of mammals. It is around 3mm thick and has six layers, each involved with various functions.

These include sensory perception, spatial reasoning, conscious thought, and language in humans.

According to the company's website: 'Vicarious is developing machine learning software based on the computational principles of the human brain.

'Our first technology is a visual perception system that interprets the contents of photographs and videos in a manner similar to humans.

'Powering this technology is a new computational paradigm we call the Recursive Cortical Network.'

In October 2013, the company announced it had developed an algorithm that 'reliably' solves modern Captchas - the world's most widely used test of a machine's ability to act human.

Captchas are used when filling in online forms to check that they are not being completed by a bot, which prevents people from, for example, programming computers to buy up concert tickets in bulk.
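As a rough illustration (this is a hypothetical minimal sketch, not Vicarious's system, and far simpler than the distorted images real sites serve), a text Captcha boils down to a random challenge the server remembers and a comparison against the user's typed answer:

```python
import random
import string

# Minimal, hypothetical Captcha sketch. Real sites render the challenge
# as a distorted image precisely so simple programs cannot read it back;
# Vicarious' claim was that its software could read such images anyway.

def make_captcha(length=6, rng=random):
    """Generate the random challenge string a site would show the user."""
    alphabet = string.ascii_uppercase + string.digits
    return ''.join(rng.choice(alphabet) for _ in range(length))

def check_captcha(expected, submitted):
    """Accept the form only if the typed answer matches the challenge
    (compared case-insensitively, as many sites allow)."""
    return submitted.strip().upper() == expected.upper()

challenge = make_captcha()
# The server stores `challenge`, shows it as an image, and later calls
# check_captcha(challenge, whatever_the_user_typed) on form submission.
```

The security of the scheme rests entirely on the image being hard for software to read, which is why an algorithm that 'reliably' solves Captchas undermines the test itself.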

In March, Elon Musk made an investment in San Francisco-based AI group Vicarious, along with Facebook founder Mark Zuckerberg (right) and actor Ashton Kutcher (left)

As well as Vicarious, Musk was an early investor in AI firm DeepMind, which was acquired by Google earlier this year for £400m ($678m).

Professor Stephen Hawking has also warned that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment.

Earlier this year, the renowned physicist discussed Johnny Depp's latest film Transcendence, which delves into a world where computers can surpass the abilities of humans.

Professor Hawking said dismissing the film as science fiction could be the 'worst mistake in history'.

Stephen Hawking has warned that artificial intelligence has the potential to be the downfall of mankind. 'Success in creating AI would be the biggest event in human history,' he said writing in the Independent. 'Unfortunately, it might also be the last'
