
Max Tegmark: Can We Prevent AI Superintelligence From Controlling Us?

🔔 Try Epoch Times now: https://ept.ms/3Uu1JA5

This is the full version of Jan Jekielek’s interview with Max Tegmark. The interview was originally released on Epoch TV on June 3, 2025.

Few people understand artificial intelligence and machine learning as well as MIT physics professor Max Tegmark. Founder of the Future of Life Institute, he is the author of “Life 3.0: Being Human in the Age of Artificial Intelligence.”

“The painful truth that’s really beginning to sink in is that we’re much closer to figuring out how to build this stuff than we are figuring out how to control it,” he says.

Where is the U.S.–China AI race headed? How close are we to science fiction-type scenarios where an uncontrollable superintelligent AI can wreak major havoc on humanity? Are concerns overblown? How do we prevent such scenarios?


ASI Risks: Similar premises, opposite conclusions | Eliezer Yudkowsky vs Mark Miller

A debate/discussion on ASI (artificial superintelligence) between Foresight Senior Fellow Mark S. Miller and MIRI founder Eliezer Yudkowsky. Sharing similar long-term goals, they nevertheless reach opposite conclusions on the best strategy.

What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. While Yudkowsky advocates an international treaty banning anyone from building ASI, Miller argues that such a pause would make an ASI singleton more likely, which he sees as the greatest danger.

They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.

Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.

Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!

WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse, or even World War III.

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’.

He explains:
⬛ How AI could release a deadly virus.
⬛ Why these 5 jobs might be the only ones left.
⬛ How superintelligence will dominate humans.
⬛ Why ‘superintelligence’ could trigger a global collapse by 2027.
⬛ How AI could be worse than nuclear weapons.
⬛ Why we’re almost certainly living in a simulation.

00:00 Intro.
02:28 How to Stop AI From Killing Everyone.
04:35 What’s the Probability Something Goes Wrong?
04:57 How Long Have You Been Working on AI Safety?
08:15 What Is AI?
09:54 Prediction for 2027
11:38 What Jobs Will Actually Exist?
14:27 Can AI Really Take All Jobs?
18:49 What Happens When All Jobs Are Taken?
20:32 Is There a Good Argument Against AI Replacing Humans?
22:04 Prediction for 2030
23:58 What Happens by 2045?
25:37 Will We Just Find New Careers and Ways to Live?
28:51 Is Anything More Important Than AI Safety Right Now?
30:07 Can’t We Just Unplug It?
31:32 Do We Just Go With It?
37:20 What Is Most Likely to Cause Human Extinction?
39:45 No One Knows What’s Going On Inside AI
41:30 Ads.
42:32 Thoughts on OpenAI and Sam Altman.
46:24 What Will the World Look Like in 2100?
46:56 What Can Be Done About the AI Doom Narrative?
53:55 Should People Be Protesting?
56:10 Are We Living in a Simulation?
1:01:45 How Certain Are You We’re in a Simulation?
1:07:45 Can We Live Forever?
1:12:20 Bitcoin.
1:14:03 What Should I Do Differently After This Conversation?
1:15:07 Are You Religious?
1:17:11 Do These Conversations Make People Feel Good?
1:20:10 What Do Your Strongest Critics Say?
1:21:36 Closing Statements.
1:22:08 If You Had One Button, What Would You Pick?
1:23:36 Are We Moving Toward Mass Unemployment?
1:24:37 Most Important Characteristics.

Follow Dr Roman:
X — https://bit.ly/41C7f70
Google Scholar — https://bit.ly/4gaGE72

You can purchase Dr Roman’s book, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’, here: https://amzn.to/4g4Jpa5

After the Singularity — What Life Would Be Like If A Technological Singularity Happened?

Go to https://hensonshaving.com/isaacarthur and enter “Isaac Arthur” at checkout to get 100 free blades with your purchase.
What happens after intelligence explodes beyond human comprehension? We explore a world shaped by superintelligence, where humanity may ascend, adapt — or disappear.

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.
Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow us and RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE
Credits:
After the Singularity — What Life Would Be Like If A Technological Singularity Happened?
Written, Produced & Narrated by: Isaac Arthur.
Editors: Lukas Konecny.
Select imagery/video supplied by Getty Images.
Music Courtesy of Epidemic Sound http://epidemicsound.com/creator.

Chapters.
0:00 Intro.
3:36 Is the Singularity Inevitable? The Case for Limits and Roadblocks.
8:42 Scenarios After the Singularity.
9:15 Scenario One: The AI Utopia.
10:31 Scenario Two: Digital Heaven.
11:57 Scenario Three: The AI Wasteland.
13:10 Scenario Four: The Hybrid Civilization.
14:48 What Does the Singularity Mean for Us?
16:31 Humanity’s Response: Resistance, Adaptation, or Surrender.
20:22 Precision.
21:45 The Limits of Superintelligence: Why Even Godlike Minds Might Struggle.
25:48 Humanity’s Role in a Post-Singularity Future.
29:06 The Fermi Paradox and the Silent Singularity.
31:10 Reflections in Pop Culture and History.
32:27 Writing the Future.

What came before the Big Bang? Supercomputers may hold the answer

Scientists are rethinking the universe’s deepest mysteries using numerical relativity: complex computer simulations of Einstein’s equations under extreme conditions. This method could help explore what happened before the Big Bang, test theories of cosmic inflation, investigate multiverse collisions, and even model cyclic universes that endlessly bounce through creation and destruction.

Astronomers probe the nature of a massive young stellar object

Astronomers from Argentina and Spain have performed near-infrared observations of a massive young stellar object known as MYSO G29.862−0.0044. The observational campaign sheds more light on the nature of this object and its unique morphology. The new findings are presented in a paper published August 13 on the arXiv preprint server.

Massive young stellar objects (MYSOs) are stars in the very early stage of formation and the progenitors of massive main-sequence stars. However, due to their short formation timescale (about 10,000–100,000 years) and the severe extinction by the surrounding gas and dust, observations of MYSOs remain challenging.

Located some 20,200 light years away, MYSO G29.862−0.0044 (YSO-G29 for short) is a massive young stellar object associated with the star-forming region G29.96–0.02. The object is likely embedded within a dense molecular core.

Scientists use Stephen Hawking theory to propose ‘black hole morsels’ — strange, compact objects that could reveal new physics

Violent black hole collisions may create black hole ‘morsels’ no larger than an asteroid — and these bizarre objects could pave the way to unlocking new physics, a study claims.

The Fermi Paradox & The Hivemind Dilemma

Are we alone, or just looking for the wrong kind of aliens? Discover how the path to hive minds and distributed consciousness might answer the Fermi Paradox — and pose new dilemmas of their own.

Watch my exclusive video Dark Biospheres: https://nebula.tv/videos/isaacarthur–

Get Nebula using my link for 40% off an annual subscription: https://go.nebula.tv/isaacarthur.
Get a Lifetime Membership to Nebula for only $300: https://go.nebula.tv/lifetime?ref=isa

Use the link https://gift.nebula.tv/isaacarthur to give a year of Nebula to a friend for just $36.

Visit our Website: http://www.isaacarthur.net.
Join Nebula: https://go.nebula.tv/isaacarthur.
Support us on Patreon: https://www.patreon.com/IsaacArthur.
Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur.

Facebook Group: https://www.facebook.com/groups/1583992725237264/
Reddit: https://www.reddit.com/r/IsaacArthur/
Twitter: https://twitter.com/Isaac_A_Arthur (follow us and RT our future content).
SFIA Discord Server: https://discord.gg/53GAShE
Credits:
The Fermi Paradox & The Hivemind Dilemma.
Written, Produced & Narrated by: Isaac Arthur.
Editor: Lukas Konecny.
Select imagery/video supplied by Getty Images.
Music Courtesy of Epidemic Sound http://epidemicsound.com/creator.

Chapters.
0:00 Intro.
1:25 What is a Hivemind?
3:48 Why Build a Hivemind?
9:51 The Hivemind Dilemma: Cognitive Horizon Limits.
14:56 FTL and the Limits of Superminds.
18:33 Asimov, Seldon, Gaia, Galaxia, and the Fallacy of Galactic Planning.
24:46 Galactic Civilizations & Fragmented Minds.
26:56 The Competition of Minds.
