Stephen Hawking on Aliens & Artificial Intelligence – “Treating AI as Science Fiction Would Potentially Be Our Worst Mistake Ever”

“We should plan ahead,” warned physicist Stephen Hawking, who died in March 2018 and was buried near Isaac Newton in Westminster Abbey. “If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here, we’ll leave the lights on’? Probably not, but this is more or less what has happened with AI.”

The memorial stone placed on top of Hawking’s grave bears his most famous equation, which gives the temperature of a black hole and underpins the notion of black-hole entropy. “Here Lies What Was Mortal Of Stephen Hawking,” read the inscription, which accompanied an image of a black hole.
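For reference, the equation on the stone is usually quoted as Hawking’s formula for the temperature of a black hole, here written in its standard textbook form:

\[
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}}
\]

where \(M\) is the black hole’s mass; the closely related Bekenstein–Hawking entropy, \(S = k_{\mathrm{B}} c^{3} A / (4 G \hbar)\), ties that temperature to the area \(A\) of the event horizon.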


“I regard the brain as a computer,” observed Hawking, “which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.”

But before Hawking left our planet, he had expressed serious concerns about the future of mankind. Foremost among them, as reported by The Sunday Times of London, was what might prove to be our greatest, and last, invention: Artificial Intelligence.

Here is Hawking in his own words, from Stephen Hawking on Aliens, AI & The Universe:


“While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans,” observed Stephen Hawking. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours.”

In short, Hawking concluded, “the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”


The Daily Galaxy via The Times of London 

Image credit, top of page: with thanks to Church & State
