Archive for the ‘existential risks’ category: Page 102

Apr 12, 2016

Jeff Bezos wants to open the way for millions to live, work in space

Posted by in categories: asteroid/comet impacts, existential risks

COLORADO SPRINGS — Jeff Bezos, founder of Amazon.com, could not be more clear about what he believes is mankind’s future.

“I want millions of people living and working in space. I want us to be a space-faring civilization,” Bezos told a packed audience at the Space Symposium on Tuesday in Colorado Springs.

“My motivation is, I don’t want Plan B to be, ‘Good news, Earth got destroyed by a big comet but we live on Mars.’ I think we need to explore and utilize space in order to save the Earth,” he said, referring to the need to shift industrial manufacturing to space to limit the impact on Earth’s resources.

Continue reading “Jeff Bezos wants to open the way for millions to live, work in space” »

Apr 11, 2016

Why Pessimistic Predictions For Future of AI May be More Hype than High Tech

Posted by in categories: complex systems, cryonics, existential risks, futurism, life extension, robotics/AI, singularity

The growth of human and computer intelligence has triggered a barrage of dire predictions about the rise of superintelligence and the singularity. But some retain their skepticism, including Dr. Michael Shermer, a science historian and founding publisher of Skeptic Magazine.

[Image: Michael Shermer quote: “I’m a skeptic not because I do not want to believe, but because I want to know.”]

The reason so many rational people put forward hypotheses that are more hype than high tech, Shermer says, is that being smart and educated doesn’t protect anyone from believing in “weird things.” In fact, sometimes smart and educated people are better at rationalizing beliefs that they hold for not-so-rational reasons. The smarter and more educated you are, the better able you are to find evidence to support what you want to be true, suggests Shermer.

“This explains why Nobel Prize winners speak about areas they know nothing about with great confidence and are sure that they’re right. Just because they have this great confidence of being able to do that (is) a reminder that they’re more like lawyers than scientists in trying to marshal a case for their client,” Shermer said. “(Lawyers) just put together the evidence, as much as you can, in support of your client and get rid of the negative evidence. In science you’re not allowed to do that, you’re supposed to look at all the evidence, including the counter evidence to your theory.”

Continue reading “Why Pessimistic Predictions For Future of AI May be More Hype than High Tech” »

Apr 7, 2016

Newly discovered planet could destroy Earth any day now

Posted by in categories: asteroid/comet impacts, existential risks

Look up the definition of irresponsible journalism and you’ll probably find a link to THIS article.


A mysterious planet that wiped out life on Earth millions of years ago could do it again, according to a top space scientist.

And some believe the apocalyptic event could happen as early as this month.

Continue reading “Newly discovered planet could destroy Earth any day now” »

Mar 31, 2016

Could ‘Planet X’ Cause Comet Catastrophes on Earth?

Posted by in categories: asteroid/comet impacts, existential risks

As astronomers track down more clues as to the existence of a large world orbiting the sun in the outer fringes of the solar system, a classic planetary purveyor of doom has been resurrected as a possible trigger behind mass extinctions on Earth.

Yes, I’m talking about “Planet X.” And yes, there’s going to be hype.

MORE: 9th Planet May Lurk in the Outer Solar System.

Continue reading “Could ‘Planet X’ Cause Comet Catastrophes on Earth?” »

Mar 30, 2016

12 Ways Humanity Could Destroy The Entire Solar System

Posted by in categories: existential risks, space

We humans are doing a bang-up job of messing up our home planet. But who’s to say we can’t go on to screw things up elsewhere? Here, not listed in any particular order, are 12 unintentional ways we could do some serious damage to our Solar System, too.

Read more

Mar 29, 2016

Something Just Slammed Into Jupiter

Posted by in categories: asteroid/comet impacts, existential risks

Astronomers have captured video evidence of a collision between Jupiter and a small celestial object, likely a comet or asteroid. Though it looks like a small blip of light, the resulting explosion was unusually powerful.

As Phil Plait of Bad Astronomy reports, the collision occurred on March 17, but confirmation of the event only emerged this week. An amateur Austrian astronomer used a 20-centimeter telescope to chronicle the unexpected event, but on its own the footage could have been some kind of visual artifact.

Continue reading “Something Just Slammed Into Jupiter” »

Mar 29, 2016

Flyby Comet Was WAY Bigger Than Thought

Posted by in categories: asteroid/comet impacts, existential risks

Oh, joy. I hope it doesn’t take an actual catastrophe before the world comes together to get all of our eggs out of this one basket.


Comet P/2016 BA14 was initially thought to be a cosmic lightweight, but as it flew past Earth on March 22, NASA pinged it with radar to reveal just what a heavyweight it really is.

Read more

Mar 18, 2016

Who’s Afraid of Existential Risk? Or, Why It’s Time to Bring the Cold War out of the Cold

Posted by in categories: defense, disruptive technology, economics, existential risks, governance, innovation, military, philosophy, policy, robotics/AI, strategy, theory, transhumanism

At least in public relations terms, transhumanism is a house divided against itself. On the one hand, there are the ingenious efforts of Zoltan Istvan — in the guise of an ongoing US presidential bid — to promote an upbeat image of the movement by focusing on human life extension and other tech-based forms of empowerment that might appeal to ordinary voters. On the other hand, there is transhumanism’s image in the ‘serious’ mainstream media, which is currently dominated by Nick Bostrom’s warnings of a superintelligence-based apocalypse. The smart machines will eat not only our jobs but us as well, if we don’t introduce enough security measures.

Of course, as a founder of contemporary transhumanism, Bostrom does not wish to stop artificial intelligence research, and he ultimately believes that we can prevent worst-case scenarios if we act now. Thus, we see a growing trade in the management of ‘existential risks’, which focusses on how we might prevent, if not predict, any such tech-based species-annihilating prospects. Nevertheless, this turn of events has made some observers reasonably wonder whether it might not be better simply to put a halt to artificial intelligence research altogether. As a result, the precautionary principle, previously invoked in the context of environmental and health policy, has been given a new lease on life as a generalized world-view.

The idea of ‘existential risk’ capitalizes on the prospect of a very unlikely event that, were it to come to pass, would be extremely catastrophic for the human condition. Thus, the high value of the outcome psychologically counterbalances its low probability. It’s a bit like Pascal’s wager, whereby the potentially negative consequences of not believing in God — to wit, eternal damnation — rationally compel you to believe in God, despite your instinctive doubts about the deity’s existence.
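The counterbalancing described here is, at bottom, expected-value arithmetic: a minuscule probability multiplied by an enormous loss can still dominate the calculation. Here is a minimal sketch of that arithmetic; the probabilities and casualty figures below are purely illustrative assumptions, not figures drawn from the article.

```python
# Illustrative expected-value comparison behind the "existential risk" argument.
# All numbers are hypothetical placeholders chosen only to show the structure of
# the reasoning: a tiny probability times an enormous loss can rival a far
# likelier but smaller harm.

scenarios = {
    # name: (assumed annual probability, assumed lives lost)
    "regional disaster": (1e-2, 1e6),
    "existential event": (1e-6, 8e9),
}

for name, (p, lives) in scenarios.items():
    expected_loss = p * lives  # expected lives lost per year under these assumptions
    print(f"{name:18s} p={p:.0e}  loss={lives:.0e}  expected={expected_loss:,.0f} lives/year")
```

On these made-up numbers, the event that is ten thousand times less likely still carries a comparable expected loss. That is the counterweight the argument relies on, and it is precisely the step the next paragraph goes on to question.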

However, this line of reasoning underestimates both the weakness and the strength of human intelligence. On the one hand, we’re not so powerful as to create a ‘weapon of mass destruction’, however defined, that could annihilate all of humanity; on the other, we’re not so weak as to be unable to recover from whatever errors of design or judgement might be committed in the normal advance of science and technology in the human life-world. I make this point not to counsel complacency but to question whether ‘existential risk’ is really the high concept it is cracked up to be. I don’t believe it is.

Continue reading “Who's Afraid of Existential Risk? Or, Why It's Time to Bring the Cold War out of the Cold” »

Mar 3, 2016

Dr. Sarif, Or How I Learned To Stop Worrying And Love The Human Revolution

Posted by in categories: computing, existential risks, government

I am not in fact talking about the delightful Deus Ex game, but rather about the actual revolution in society and technology we are witnessing today. Pretty much every day, whatever news source I look at, be it cable news networks or Facebook feeds or what have you, I see fear-mongering. “Implantable chips will let the government track you!” or “Hackers will soon be able to steal your thoughts!” (Seriously, I’ve seen both of these and much more and much crazier.) …But I’m here to tell you two things. First, calm the hell down. Nearly every doomsday scenario painted by fear-mongering assholes is either impossible or so utterly unlikely as to be effectively impossible. And second… you should psych the hell up, because it’s actually extremely exciting and worth getting excited about. But for good reasons, not bad.

Read more

Mar 2, 2016

Inside the Artificial Intelligence Revolution: A Special Report, Pt. 1

Posted by in categories: existential risks, innovation, robotics/AI

We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species.

Read more