Joscha Bach & Anders Sandberg

Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!

0:00 Intro.
0:37 What is consciousness? Phenomenology — functionalism & panpsychism.
1:54 Causal boundaries — the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.
3:20 Minds are not states — they are processes. We don’t see causal filtering in tables.
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.
9:49 Methodological humility about armchair philosophy of mind.
12:41 Putnam-style Brain-in-a-vat — and why standard objections to AI minds fall flat.
16:37 Is sentience required (or desired) for not just moral competence in AI, but moral motivation as well?
22:35 Why stepping outside yourself is powerful — seeing.
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What’s still missing.
28:16 AI, hybrid minds, and the limits of human augmentation.
32:32 Can minds be extended — in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough.
39:41 Why AI is so data-hungry — and why better algorithms must exist.
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception.
51:05 What comes after copilots: agent teams, multimodality and new AI workflows.
55:32 Can AI help us discover new forms of taste and aesthetics.
59:49 Using AI to learn art history and invent a transhumanist aesthetic.
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create.
1:08:10 What happens when AI starts shaping human relationships.
1:11:18 Why feeling in control can matter more than being right.
1:12:58 Why intelligence without wisdom is very dangerous.
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere.
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem.
1:29:47 Can AI become more moral than us (humans)? and if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved.
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries.
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan.
1:59:36 Will superintelligences converge into a cosmic singleton?

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!
Buy me a coffee? https://buymeacoffee.com/tech101z

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards.
Adam Ford.
Science, Technology & the Future — #SciFuture — http://scifuture.org

Hume on suicide

Anyone interested in the morality of suicide still reads David Hume’s essay on the subject today. There are numerous reasons for this, but the central one is that it sets the starting point for the contemporary debate about the morality of suicide, namely, the debate about whether some condition of life could present one with a morally acceptable reason for autonomously deciding to end one’s life. We can only have this debate if we think that at least some acts of suicide can be moral, and we can only think this if we give up the blanket condemnation of suicide that theology had put in place. I look at this strategy of argument in the context of the wider eighteenth-century attempt to develop a non-theologically based ethic. The result in Hume’s case is a very modern tract on suicide, with voluntariness and autonomy to the fore, and with reflection on the condition of one’s life, and on one’s desire to carry on living in that condition, as the motivating circumstance.

What Can 50-Year-Old Chatbots Teach Us About Clinical Applications of AI?

Can a large language model (LLM) provide insights on the history of chatbots and their clinical applications? 🤖

In this episode of JAMA+ AI Conversations, JAMA+ AI Editor in Chief Roy Perlis, MD, MSc, interviews OpenAI’s ChatGPT (GPT-4o, voice mode) about the development and legacy of the first clinical chatbots, ELIZA and PARRY.

The discussion explores differing perspectives of their creators, as well as how foundational debates about technology and ethics continue to inform the present landscape of AI in mental health care.

🎧 Listen now.


Technology is NOT Enough!

Fifteen years ago, I wrote something that annoyed many techno-optimists.

Ten years ago, I filmed it as a podcast.

Today it feels less controversial — and more urgent.

Technology is NOT Enough.

We have the science to feed everyone. We have the tech to provide clean water. We understand climate change. We know how to reduce suffering.

And yet we don’t act.

People with synesthesia experience distinct thematic patterns in their dreams

From the article:

The thematic analysis revealed that synesthete dreams systematically differed from control dreams in four distinct categories. People with synesthesia were more likely to describe dreams involving digital life. This theme included references to scrolling, screens, computer accounts, and routine technology use.

Synesthetes also reported more dreams centered on interpersonal regret. This theme featured scenarios involving guilt, moral conflict, missed opportunities, and urgent apologies. The scientists note that this aligns with the heightened emotional reactivity and memory retention frequently observed in people with synesthesia.

The third prevalent theme in synesthete dreams was diverse worlds. This category included shifting environments, cultural settings, and complex or dystopian landscapes. Because synesthetes tend to score high in openness to experience, they may possess a more flexible cognitive style that supports the construction of richly detailed and varied dream settings.

Finally, the violent conflict theme appeared more often in the dreams of synesthetes. This theme involved fictional threats, horror imagery, and words associated with intense physical clashes. The researchers suggest that individuals with enhanced memory abilities, a common trait in synesthesia, might be more likely to incorporate intense waking experiences into their dreams.


Do waking perceptual traits influence our sleep? New research indicates that people with synesthesia have unique dream patterns, providing evidence that our individual brain structures actively shape our imagination long after we fall asleep.

How space settlement can challenge consumerism

Apparently public interest in extraterrestrial settlement is steadily increasing.


Anyone thinking seriously about the future of humanity in space should be alarmed by the overemphasis on technical requirements and the corresponding neglect of other key issues.

Space settlement should be developed by following, or deliberately avoiding, certain sets of ideas, doctrines, and philosophical guidelines. In other words, space settlement needs an ideology in order to be put into practice. The qualities of such an ideology would let us foresee what a human society in space would look like, what its social structure and moral values would be, and ultimately whether or not it could survive.

This article is devoted to casting light on how the predominant ideology of consumerism will be challenged by human colonies in space, and in which ways extraterrestrial human culture might affect or reshape our way of thinking here on Earth.

In defense of artificial suffering

Perhaps our last line of defense.


Philosophical Studies — The ability to suffer, in the case of artificial entities, is often viewed as a moral turning point—once detected, there is no going back, and the moral landscape is irreversibly altered. The presence of entities capable of suffering imposes moral and legal obligations on humans. It is therefore unsurprising that many have urged caution in pursuing artificial suffering, with some even proposing a moratorium. In this paper, however, I argue that the emergence of artificial suffering need not entail moral disaster. On the contrary, I defend its development and contend that it may be a necessary feature of superintelligent robots. I suggest that artificial suffering could be essential for enabling human-like ethics in machines, bridging the retribution gap, and functioning as a control mechanism to mitigate existential risks. Rather than constraining research in this area, I maintain that work on artificial suffering should be actively intensified.

Brave New Biology: Intelligence Trumps DNA — with Dr. Michael Levin and Dr. John Vervaeke

Dr. Michael Levin is a professor in the Department of Biology at Tufts University and an associate faculty member at the Wyss Institute at Harvard. He directs the Allen Discovery Center at Tufts, where his team integrates biophysics, computational modeling, and behavioral science to study how cellular collectives make decisions during embryogenesis, regeneration, and cancer.

Levin’s research centers on diverse forms of intelligence and unconventional embodied minds, bridging conceptual theory, experimental biology, and translational work aimed at regenerative medicine. His lab also pioneers efforts in artificial intelligence and the bioengineering of novel living machines.

Read more about Dr. Michael Levin’s work: https://drmichaellevin.org/
X: https://twitter.com/drmichaellevin.
YouTube: ‪@drmichaellevin

John Vervaeke’s YouTube channel: ‪@johnvervaeke

📖 Let’s take our stories back. Check out our latest book in the Tales for Now and Ever series, Rapunzel and the Evil Witch: https://rapunzelbook.com/

Join Fr. Stephen De Young in his Jubilees and the Nephilim course, now streaming live on The Symbolic World: https://www.thesymbolicworld.com/cour…

00:00 — Coming up
01:14 — Intro music
01:40 — Introduction
02:23 — What Michael does
06:19 — Example experiments
07:51 — Memories outside the brain
12:46 — Terminology: memory
13:59 — Communicate to biological cells
15:54 — Limitations?
17:39 — Platonic patterns
34:06 — Incarnation and constraints
39:26 — Causes
49:28 — New beings in new spaces
52:25 — What the Enlightenment dismissed
55:32 — Molecular medicine
57:36 — Subtle bodies
01:00:45 — Ethics
01:03:37 — Medical and meaning applications
01:11:42 — Frightening
01:14:31 — Against the status quo
01:19:03 — Should we dabble in this technology?

💻 Website and blog: http://www.thesymbolicworld.com
🔗 Linktree: https://linktr.ee/jonathanpageau
🔒 BECOME A PATRON: https://thesymbolicworld.com/subscribe
Our website designers: https://www.resonancehq.io/
My intro was arranged and recorded by Matthew Wilkinson: https://matthewwilkinson.net/

Beyond the Brain: Michael Levin on Living Intelligence & Minds in the era of AI

A conversation co-published by AI House Davos and Michael Levin’s Academic Content (@drmichaellevin)

In this conversation, we explore how intelligence exists across all scales of life, from cells to collectives, and what this means for our understanding of AI, minds, and what it means to be human.

Professor Michael Levin challenges the assumption that intelligence begins with brains, revealing how biological systems improvise, adapt, and solve problems in ways that go far beyond what our computational architectures attempt. From cognitive glue to the ethics of diverse intelligence, this interview questions the categories we’ve inherited and asks what truly matters as we enter an era of radically different embodiments.

Speaker.
Michael Levin (Director at Allen Discovery Center at Tufts University)

Moderator.
Louisa Hillegaart (Founder’s Associate, AI House Davos)

© AI House Davos 2025

Why comparisons between AI and human intelligence miss the point

AI systems, by contrast, do not cooperate, negotiate meaning, form social bonds or engage in shared moral reasoning. They process information in isolation, responding to prompts without awareness, intention or accountability.

Embodiment and social understanding matter

Human intelligence is also embodied. Our thinking is shaped by physical experience, emotion and social interaction. Developmental psychology shows that learning begins in infancy through touch, movement, imitation and shared attention with others. These embodied experiences ground abstract reasoning later in life.
