According to physicist Stephen Hawking, humanity likely only has about 1,000 years left on Earth. He also warns that the only thing that could save us from certain extinction is creating colonies in other parts of the Solar System.
“[W]e must . . . continue to go into space for the future of humanity,” Hawking explained. “I don’t think we will survive another 1,000 years without escaping beyond our fragile planet.”
Hawking’s concerns over humanity’s lifespan have led him to discuss artificial intelligence as well, having said AI will either be “the best, or the worst, thing ever to happen to humanity.”
Meanwhile, billionaire entrepreneur Elon Musk has announced his hope to establish a human colony on Mars in the next few decades through his aerospace firm SpaceX. “I don’t have a doomsday prophecy,” Musk said, “but history suggests some doomsday event will happen.”
But Hawking has estimated that self-sustaining human colonies on Mars won’t be a practical option for at least another 100 years, and has emphasized the need for extreme caution in the coming decades.
“Although the chance of disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next 1,000 or 10,000 years,” Hawking noted. “By that time, we should have spread out into space and to other stars, so a disaster on Earth would not mean the end of the human race.”
Putting aside the severe effects of climate change, global pandemics driven by antibiotic resistance, and the growing nuclear capabilities of warring nations, we may soon be confronted with threats we have no experience dealing with.
Last year Hawking was part of a coalition, including Elon Musk and more than 20,000 researchers and experts, that called for a ban on the development of autonomous weapons capable of firing on targets without human intervention.
Musk’s new research initiative, dedicated to the ethics of AI, described today’s systems as completely submissive, but the concern remains over what happens when we remove too many of their limitations. “AI systems today have impressive but narrow capabilities,” the founders explained.
“It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”