Archive for the ‘existential risks’ category: Page 10

Aug 26, 2023

Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less

Posted by in categories: business, existential risks, robotics/AI

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them. Learn more, read the summary and find the full transcript on the 80,000 Hours website:

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Continue reading “Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less” »

Aug 25, 2023

ChatGPT Still Needs Humans

Posted by in categories: employment, existential risks, robotics/AI

The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the concerning – AI will eliminate many jobs – and the overwrought – AI poses an extinction-level threat to humanity. All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.

Continue reading “ChatGPT Still Needs Humans” »

Aug 17, 2023

Elon Musk on Neuralink: Solving Brain Diseases & Reducing the Risk of AI

Posted by in categories: biotech/medical, Elon Musk, existential risks, genetics, robotics/AI, singularity

Elon Musk delves into the groundbreaking potential of Neuralink, a revolutionary venture aimed at interfacing with the human brain to tackle an array of brain-related disorders. Musk envisions a future where Neuralink’s advancements lead to the resolution of conditions like autism, schizophrenia, memory loss, and even spinal cord injuries.

Elon Musk discusses the transformative power of Neuralink, highlighting its role in restoring motor control after spinal cord injuries, revitalizing brain function post-stroke, and combating genetically or trauma-induced brain diseases. Musk’s compelling insights reveal how interfacing with neurons at an intricate level can pave the way for repairing and enhancing brain circuits using cutting-edge technology.

Continue reading “Elon Musk on Neuralink: Solving Brain Diseases & Reducing the Risk of AI” »

Aug 15, 2023

🔴 The Fermi Paradox, Cyborgs, And Artificial Intelligence — My Interview With Isaac Arthur

Posted by in categories: cyborgs, existential risks, robotics/AI

In this week’s live stream, I’m going to share clips of my interview with Isaac Arthur; you can find the full version on the Answers With Joe Podcast: h…

Aug 14, 2023

How to Survive a Nuclear War: Study Reveals the Safest Places to Wait Out the Conflict

Posted by in categories: existential risks, food, military

New research indicates that Australia and New Zealand are the two best places on Earth to survive a nuclear war. The recently published calculations don’t focus on blast-related deaths or even deaths caused by radiation fallout, which most estimates say would number in the hundreds of millions, but instead look at how a nuclear winter caused by the explosions would affect food supplies, potentially leading to the starvation of billions.

Nuclear War Simulations Performed For Decades

Since the first atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki in 1945, effectively spelling the end of World War II, war game theorists have run myriad simulations to determine the potential effects of a full-blown nuclear war. Many simulations look at the hundreds of millions who would likely die in the initial blasts, while others have tried to model the slower but equally deadly toll from radiation sickness.

Aug 13, 2023

Daniel Schmachtenberger: “Artificial Intelligence and The Superorganism” | The Great Simplification

Posted by in categories: existential risks, health, robotics/AI

On this episode, Daniel Schmachtenberger returns to discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries and facing geopolitical risks, all with existential consequences. How does artificial intelligence not only add to these risks but also accelerate the entire dynamic of the metacrisis? What is the role of intelligence vs. wisdom on our current global pathway, and can we change course? Does artificial intelligence have a role to play in creating a more stable system, or will it be the tipping point that drives our current one out of control?

About Daniel Schmachtenberger:
Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.

Continue reading “Daniel Schmachtenberger: ‘Artificial Intelligence and The Superorganism’ | The Great Simplification” »

Aug 12, 2023

Artificial intelligence could lead to extinction, experts warn

Posted by in categories: biotech/medical, existential risks, robotics/AI

Heads of OpenAI, Google DeepMind and Anthropic say the threat is as great as pandemics and nuclear war.

Aug 9, 2023

LLMs like GPT and Bard can be manipulated and hypnotized

Posted by in categories: existential risks, finance, internet, robotics/AI

Hypnotized LLMs can help leak confidential financial information, generate malicious code and even cross red lights.

Tech pundits worldwide have been oscillating between marking artificial intelligence as the end of all of humanity and calling it the most significant thing humans have created since the internet.

We are in a phase where we are unsure what the AI Pandora’s box will reveal. Are we heading for doomsday or utopia?

Aug 7, 2023

AI Expert: “I Think We’re All Going to Die”

Posted by in categories: existential risks, life extension, robotics/AI

There’s no shortage of AI doomsday scenarios to go around, so here’s another AI expert who pretty bluntly forecasts that the technology will spell the death of us all, as reported by Bloomberg.

This time, it’s not a so-called godfather of AI sounding the alarm bell — or that other AI godfather (is there a committee that decides these things?) — but the controversial AI theorist and provocateur Eliezer Yudkowsky, who has previously called for bombing machine learning data centers. So, pretty in character.

Continue reading “AI Expert: ‘I Think We’re All Going to Die’” »

Jul 31, 2023

New algorithm ensnares its first ‘potentially hazardous’ asteroid

Posted by in categories: asteroid/comet impacts, existential risks, information science

An asteroid discovery algorithm—designed to uncover near-Earth asteroids for the Vera C. Rubin Observatory’s upcoming 10-year survey of the night sky—has identified its first “potentially hazardous” asteroid, a term for space rocks in Earth’s vicinity that scientists like to keep an eye on.

The roughly 600-foot-long asteroid, designated 2022 SF289, was discovered during a test drive of the algorithm with the ATLAS survey in Hawaii. Finding 2022 SF289, which poses no risk to Earth for the foreseeable future, confirms that the next-generation algorithm, known as HelioLinc3D, can identify near-Earth asteroids with fewer and more dispersed observations than required by today’s methods.

Continue reading “New algorithm ensnares its first ‘potentially hazardous’ asteroid” »
