Archive for the ‘existential risks’ category: Page 20

Sep 5, 2023

North Korea stages tactical nuclear attack drill

Posted in categories: existential risks, military, nuclear weapons

SEOUL, Sept 3 (Reuters) — North Korea conducted a simulated tactical nuclear attack drill that included two long-range cruise missiles in an exercise to “warn enemies” the country would be prepared in case of nuclear war, the KCNA state news agency said on Sunday.

KCNA said the drill was successfully carried out on Saturday and two cruise missiles carrying mock nuclear warheads were fired towards the West Sea of the Korean peninsula and flew 1,500 km (930 miles) at a preset altitude of 150 meters.

Pyongyang also said it would bolster its military deterrence against the United States and South Korea.

Sep 5, 2023

Asteroid the size of 81 bulldogs to pass Earth on Wednesday

Posted in categories: asteroid/comet impacts, existential risks

Asteroid 2021 JA5 is around the size of 81 bulldogs, the mascot of the University of Georgia’s college football team. But it won’t hit us; hopefully the Bulldogs will have better luck.

Sep 4, 2023

OpenAI’s Moonshot: Solving the AI Alignment Problem

Posted in categories: existential risks, robotics/AI

Jan Leike explains OpenAI’s effort to protect humanity from superintelligent AI.

Sep 3, 2023

The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI

Posted in categories: biotech/medical, existential risks, robotics/AI

“It’s a time of huge uncertainty,” says Geoffrey Hinton from the living room of his home in London. “Nobody really knows what’s going to happen … I’m just sounding the alarm.”

In The Godfather in Conversation, the cognitive psychologist and computer scientist known as the “Godfather of AI” explains why, after a lifetime spent developing the branch of artificial intelligence called deep learning, he is suddenly warning about existential threats to humanity.

Aug 26, 2023

This new technology could change AI (and us)

Posted in categories: existential risks, robotics/AI

Organoid intelligence is an emerging field in computing and artificial intelligence.

Earlier this year, the Australian startup Cortical Labs developed a cybernetic system made from human brain cells. The company called it DishBrain and taught it to play Pong.

Aug 26, 2023

Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less

Posted in categories: business, existential risks, robotics/AI

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them. Learn more, read the summary and find the full transcript on the 80,000 Hours website: https://80000hours.org/podcast/episodes/jan-leike-superalignment.

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Aug 25, 2023

ChatGPT Still Needs Humans

Posted in categories: employment, existential risks, robotics/AI

The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic (large language models could replace conventional web search) to the concerning (AI will eliminate many jobs) to the overwrought (AI poses an extinction-level threat to humanity). All of these themes share a common denominator: large language models herald an artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.

Aug 17, 2023

Elon Musk on Neuralink: Solving Brain Diseases & Reducing the Risk of AI

Posted in categories: biotech/medical, Elon Musk, existential risks, genetics, robotics/AI, singularity

Elon Musk delves into the groundbreaking potential of Neuralink, a revolutionary venture aimed at interfacing with the human brain to tackle an array of brain-related disorders. Musk envisions a future where Neuralink’s advancements lead to the resolution of conditions like autism, schizophrenia, memory loss, and even spinal cord injuries.

Elon Musk discusses the transformative power of Neuralink, highlighting its role in restoring motor control after spinal cord injuries, revitalizing brain function post-stroke, and combating genetically or trauma-induced brain diseases. Musk’s compelling insights reveal how interfacing with neurons at an intricate level can pave the way for repairing and enhancing brain circuits using cutting-edge technology.

Aug 15, 2023

🔴 The Fermi Paradox, Cyborgs, And Artificial Intelligence — My Interview With Isaac Arthur

Posted in categories: cyborgs, existential risks, robotics/AI

In this week’s live stream, I’m going to share clips of my interview with Isaac Arthur; you can find the full version on the Answers With Joe Podcast: h…

Aug 14, 2023

How to Survive a Nuclear War: Study Reveals the Safest Places to Wait Out the Conflict

Posted in categories: existential risks, food, military

New research indicates that Australia and New Zealand are the two best places on Earth to survive a nuclear war. The recently published set of calculations doesn’t focus on blast-related deaths or even deaths caused by radiation fallout, which most estimates say would number in the hundreds of millions, but instead looks at how a nuclear winter caused by nuclear bomb explosions would affect food supplies, potentially leading to the starvation of billions.

Nuclear War Simulations Performed For Decades

Since the first atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki in 1945, effectively bringing World War II to an end, war-game theorists have run myriad simulations to determine the potential effects of a full-blown nuclear war. Many simulations focus on the hundreds of millions of people who would likely die in the initial blasts, while others have tried to model the slower but equally deadly toll from radiation sickness.
