
“It’s a time of huge uncertainty,” says Geoffrey Hinton from the living room of his home in London. “Nobody really knows what’s going to happen … I’m just sounding the alarm.”

In The Godfather in Conversation, the cognitive psychologist and computer scientist known as the “Godfather of AI” explains why, after a lifetime spent developing a type of artificial intelligence known as deep learning, he is suddenly warning about existential threats to humanity.

A University Professor Emeritus at the University of Toronto, Hinton explains how neural nets work, the role he and others played in developing them, and why the kind of digital intelligence that powers ChatGPT and Google’s PaLM may hold an unexpected advantage over our own. He also lays out his concerns about how the world could lose control of a technology that, paradoxically, promises to unleash huge benefits – from treating diseases to combating climate change.

Organoid intelligence is an emerging field in computing and artificial intelligence.

Earlier this year, the Australian startup Cortical Labs developed a cybernetic system made from human brain cells. They called it DishBrain and taught it to play Pong.

The roots of this exciting technology go back 60 years. Though still novel, it is already superior to conventional deep-learning AI in several respects. However, it also poses new ethical and existential risks to humanity.

Watch the video to explore how Organoid Intelligence might evolve over the next few decades and how it could fit into our lives.

===
You can find me on TikTok for short and punchy stories about emerging tech: https://www.tiktok.com/@_futureflux.

===

The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them. Learn more, read the summary and find the full transcript on the 80,000 Hours website: https://80000hours.org/podcast/episodes/jan-leike-superalignment.

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out, within four years, how to make superintelligent AI systems aligned and safe to use, and the lab is putting a massive 20% of its computational resources behind the effort.

Today’s guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, “…the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. … Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem — it’s also hiring dozens of scientists and engineers to build out the Superalignment team.

Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains:

Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on… and I think it’s pretty likely going to work, actually. And that’s really, really wild, and it’s really exciting. It’s like we have this hard problem that we’ve been talking about for years and years and years, and now we have a real shot at actually solving it. And that’d be so good if we did.

The media frenzy surrounding ChatGPT and other large language model artificial intelligence systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the concerning – AI will eliminate many jobs – and the overwrought – AI poses an extinction-level threat to humanity. All of these themes have a common denominator: large language models herald artificial intelligence that will supersede humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they’re completely dependent on human knowledge and labor. They can’t reliably generate new knowledge, of course, but there’s more to it than that.

ChatGPT can’t learn, improve or even stay up to date without humans giving it new content and telling it how to interpret that content, not to mention programming the model and building, maintaining and powering its hardware. To understand why, you first have to understand how ChatGPT and similar models work, and the role humans play in making them work.
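As a deliberately tiny illustration of that dependence, here is a sketch of a bigram “language model” in Python. The sentences and names below are made up for the example, and real systems like ChatGPT are vastly larger transformer networks – but the dependence on human-supplied text is the same in kind: the model can only recombine what humans wrote, and the only way to “update” it is to feed it more human writing.

```python
# Toy sketch, not ChatGPT's actual code: a bigram "language model" built
# purely from counts over human-written sentences. All example text and
# names are placeholders invented for this illustration.
import random
from collections import defaultdict, Counter

def train(sentences):
    """Count which word follows which in the human-written corpus."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8):
    """Sample a continuation; it stalls the moment it leaves the training data."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:          # nothing humans ever wrote after this word
            break
        word = random.choices(list(followers),
                              weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

human_text = [
    "the model predicts the next word",
    "humans label the answers the model gives",
]
model = train(human_text)
print(generate(model, "the"))

# The only way this "model" ever reflects anything new is more human writing:
model = train(human_text + ["new events require new human-written text"])
print(generate(model, "new"))
```

The toy version makes the point bluntly: without fresh human content and human judgments about that content, the system has nothing new to predict from.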

Elon Musk delves into the groundbreaking potential of Neuralink, a revolutionary venture aimed at interfacing with the human brain to tackle an array of brain-related disorders. Musk envisions a future where Neuralink’s advancements lead to the resolution of conditions like autism, schizophrenia, memory loss, and even spinal cord injuries.

Elon Musk discusses the transformative power of Neuralink, highlighting its role in restoring motor control after spinal cord injuries, revitalizing brain function post-stroke, and combating genetically or trauma-induced brain diseases. Musk’s compelling insights reveal how interfacing with neurons at an intricate level can pave the way for repairing and enhancing brain circuits using cutting-edge technology.

Discover the three-layer framework Musk envisions: the primary layer akin to the limbic system, the more intelligent cortex as the secondary layer, and the potential tertiary layer where digital superintelligence might exist. Musk’s thought-provoking perspective raises optimism about the coexistence of a digital superintelligence with the human brain, fostering a harmonious relationship between these layers of consciousness.

Elon Musk emphasises the urgency of Neuralink’s mission, stressing the importance of developing a human brain interface before the advent of digital superintelligence and the elusive singularity. By doing so, he believes we can mitigate existential risks and ensure a stable future for humanity and consciousness as we navigate the uncharted territories of technological evolution.

For more insights, visit EM360tech.com:
https://em360tech.com/tech-news.

#AI #superintelligence #machinelearning.

New research indicates that Australia and New Zealand are the two best places on Earth to survive a nuclear war. The recently published calculations don’t focus on blast-related deaths or even deaths caused by radiation fallout, which most estimates put in the hundreds of millions, but instead look at how a nuclear winter caused by the explosions would affect food supplies, potentially leading to the starvation of billions.
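To make the shape of that comparison concrete, here is a deliberately simplified sketch in Python. The country names, yield losses and calorie figures are invented placeholders, not numbers from the study; the point is only the form of the calculation – post-nuclear-winter food production versus what a population needs.

```python
# Toy sketch of a nuclear-winter food-supply comparison.
# All numbers and country names are invented placeholders, NOT study figures.
DAILY_KCAL_PER_PERSON = 2100  # rough survival requirement

# name: (population in millions, normal food output in trillion kcal/year,
#        assumed fraction of output lost to nuclear winter)
countries = {
    "Placeholder A": (26, 80, 0.4),
    "Placeholder B": (5, 15, 0.4),
    "Placeholder C": (330, 250, 0.7),
}

def self_sufficient(pop_millions, output_tkcal, loss_fraction):
    """Does post-winter food output still cover the population's needs?"""
    need = pop_millions * 1e6 * DAILY_KCAL_PER_PERSON * 365      # kcal/year
    remaining = output_tkcal * 1e12 * (1 - loss_fraction)        # kcal/year
    return remaining >= need

for name, data in countries.items():
    status = "remains self-sufficient" if self_sufficient(*data) else "faces a shortfall"
    print(f"{name}: {status}")
```

Countries with a large food surplus relative to a small population come out ahead in this kind of accounting, which is the intuition behind the Australia and New Zealand result.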

Nuclear War Simulations Performed For Decades

Since the first atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki in 1945, effectively ending World War II, war-game theorists have run myriad simulations to gauge the potential effects of a full-blown nuclear war. Many focus on the hundreds of millions who would likely die in the initial blasts, while others have tried to model the slower but equally deadly toll from radiation sickness.

On this episode, Daniel Schmachtenberger returns to discuss a surprisingly overlooked risk to our global systems and planetary stability: artificial intelligence. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards (and beyond) numerous planetary boundaries and facing geopolitical risks, all with existential consequences. How does artificial intelligence not only add to these risks but accelerate the entire dynamic of the metacrisis? What is the role of intelligence versus wisdom on our current global pathway, and can we change course? Does artificial intelligence have a role to play in creating a more stable system, or will it be the tipping point that drives our current one out of control?

About Daniel Schmachtenberger:
Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.

The throughline of his interests has to do with ways of improving the health and development of individuals and society, with a virtuous relationship between the two as a goal.

Towards these ends, he’s had particular interest in the topics of catastrophic and existential risk, civilization and institutional decay and collapse as well as progress, collective action problems, social organization theories, and the relevant domains in philosophy and science.

For show notes and Daniel’s recommended content for further AI learning:

Hypnotized LLMs can be coaxed into leaking confidential financial information, generating malicious code and even advising users to cross red lights.

Tech pundits worldwide have swung between declaring artificial intelligence the end of humanity and calling it the most significant technology humans have touched since the internet.

We are in a phase where we are unsure what the AI Pandora’s box will reveal. Are we heading for doomsday or utopia?