For this episode, I’m joined by Rick Tumlinson, co-founder of the Space Frontier Foundation and one of the most influential figures in the commercial space industry.
In this episode, we slice the conversation into four categories: the social history of the space movement and how we got here; the business of space and the astropolitics shaping who controls the final frontier; the genetics and ethics of humanity becoming a multi-planetary species; and the deeper philosophy of why leaving Earth isn’t just raw and blind ambition but something closer to destiny (for some people).
Timestamps: 0:00 Social History. 30:19 Business and Astropolitics. 45:20 Genetics and Ethics. 56:02 Philosophical.
Can AI really be moral — or does it just produce moral-sounding answers? Wendell Wallach, co-author of Moral Machines, joins me to discuss machine ethics, moral motivation, AI governance, and why controlling AI may not be enough.
The race to build smarter artificial intelligence has taken an unexpected philosophical turn after Google DeepMind quietly hired an in-house philosopher to investigate the potential for machine consciousness…
…DeepMind is now integrating philosophical reasoning directly into its research pipeline rather than treating ethics as an external concern. This move suggests that Big Tech no longer views sentience as a science-fiction trope but as a technical and moral hurdle, marking a transition from building tools to questioning the nature of those tools themselves.
The Google DeepMind philosopher role focuses on the machine sentience debate, aiming to define what it means for a digital system to ‘feel’ or ‘experience’.
This internal appointment comes at a time when large language models are becoming increasingly indistinguishable from human interlocutors. While most researchers maintain that these systems are mere statistical predictors, the boundary is thinning. The decision to bring a philosopher into the core development team indicates that Google expects its path toward artificial general intelligence to raise profound questions about awareness and machine rights.
Google DeepMind has hired an in-house philosopher to explore the boundaries of machine consciousness and ethics. This move follows years of controversy surrounding AI sentience and the limits of large language models.
Half of the participants received actual stimulation aimed at the ventromedial prefrontal cortex. The other half received a fake version of the treatment, known as a sham stimulation. After the procedure, all participants completed the same card game and judgment exercises.
The people who received the real brain stimulation showed a wider gap between their behavior and their judgements. By disrupting the normal function of this brain region, the researchers made participants measurably more hypocritical, evidence that the ventromedial prefrontal cortex plays a causal role in maintaining moral consistency.
These results suggest that moral consistency is not an automatic trait. It is a biological process that relies on the brain’s ability to sync up different types of information. “Our findings suggest that we should treat moral consistency like a skill that can be strengthened through deliberate decision making,” says senior author Hongwen Song of the University of Science and Technology of China.
That hasn’t stopped some from exploring the idea as part of a secretive effort to realize an alternative to anti-aging tech that sounds like it was ripped straight out of a dystopian science fiction novel. A billionaire-backed stealth startup, called R3 Bio, recently announced that it was raising money to develop non-sentient monkey “organ sacks,” as Wired reported last week, an eyebrow-raising alternative to animal testing. Such structures would contain all typical organs excluding the brain, ultimately serving as a source for donor organs and tissues.
But according to a sprawling follow-up investigation by MIT Technology Review, R3 Bio’s founders secretly have a far more ambitious goal in mind: creating entire “brainless clones” of the human body that aging or ill individuals could one day transplant their brain into. One advantage of not developing the brain in the donor bodies, albeit a ghoulish one: such a brain-free clone would neatly circumvent certain moral conundrums over the concept.
Still, to call the idea ethically fraught would be a vast understatement. One insider, speaking to Tech Review, likened a pitch they heard from R3’s founder, John Schloendorn, to a “close encounter of the third kind” with “Dr. Strangelove.” The company has since distanced itself from the idea of brainless human clones.
Dr. Eyal Aharoni discusses one of the most provocative frontiers in technology: the automation of moral judgement. His talk focuses on the outcomes of a comparative moral Turing test (in which AI outperformed humans across a range of metrics), as well as AI-assisted medical triage!
We welcome Dr. Eyal Aharoni (Georgia State University) to the Future Day 2026 stage to discuss one of the most provocative frontiers in technology: the automation of moral judgement.
Breaking the Moral Turing Test: Studies of human attribution and deference to AI moral judgment and decision-making.
Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!
Timestamps:
0:00 Intro
0:37 What is consciousness? Phenomenology, functionalism & panpsychism
1:54 Causal boundaries: the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity
3:20 Minds are not states, they are processes. We don't see causal filtering in tables
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism
9:49 Methodological humility about armchair philosophy of mind
12:41 Putnam-style brain-in-a-vat, and why standard objections to AI minds fall flat
16:37 Is sentience required (or desired) not just for moral competence in AI, but for moral motivation as well?
22:35 Why stepping outside yourself is powerful: seeing
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What's still missing
28:16 AI, hybrid minds, and the limits of human augmentation
32:32 Can minds be extended, in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough
39:41 Why AI is so data-hungry, and why better algorithms must exist
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception
51:05 What comes after copilots: agent teams, multimodality and new AI workflows
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create
1:08:10 What happens when AI starts shaping human relationships
1:11:18 Why feeling in control can matter more than being right
1:12:58 Why intelligence without wisdom is very dangerous
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem
1:29:47 Can AI become more moral than us (humans)? And if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan
1:59:36 Will superintelligences converge into a cosmic singleton?
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…
Though previous studies have identified brain regions that are involved in moral behavior and moral judgement, little is known about how brain activity underpins moral inconsistency.
To identify brain regions associated with moral inconsistency, the researchers used functional MRI (fMRI) to scan people’s brains during a task that required them to weigh honesty against profit. Participants could earn more money by being dishonest, but they were also asked to rate their own behavior on a 10-point scale from “extremely immoral” to “extremely moral.” The team also monitored the participants’ brain activity while they judged the morality of other people undertaking the same task.
In people who were morally consistent—meaning, they judged themselves and others by the same moral standards—the vmPFC was activated similarly during both the behavioral and judgement tasks. However, in morally inconsistent participants—those who judged other people’s cheating as immoral but rated their own cheating more leniently—the vmPFC was less active during the behavioral task and less connected to other brain regions involved in decision making and morality.
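The self-versus-other rating gap described above can be sketched as a simple difference score. This is a toy illustration only: `inconsistency_score` is a hypothetical helper for intuition, not the paper's actual metric.

```python
# Toy illustration of the study's design (NOT the paper's actual metric):
# moral inconsistency as the gap between how a participant rates their own
# cheating and how they rate others' identical cheating, on the study's
# 10-point scale from "extremely immoral" (low) to "extremely moral" (high).

def inconsistency_score(self_rating: float, other_rating: float) -> float:
    """Positive when a participant rates others' cheating as less moral
    (lower score) than their own identical cheating."""
    return self_rating - other_rating

# A consistent participant applies the same standard to everyone:
print(inconsistency_score(self_rating=3.0, other_rating=3.0))  # 0.0
# An inconsistent participant excuses their own cheating while
# condemning the same behavior in others:
print(inconsistency_score(self_rating=7.0, other_rating=2.0))  # 5.0
```

Under this reading, the fMRI result is that participants with larger positive scores showed weaker vmPFC engagement during the behavioral task.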
To examine whether vmPFC activity plays a causal role in moral inconsistency, the researchers stimulated some participants’ vmPFCs via a non-invasive method called transcranial temporal interference stimulation (tTIS) before they undertook the behavioral and judging tasks. They showed that vmPFC stimulation resulted in higher levels of moral inconsistency compared to participants who received mock stimulation.
These results suggest that people who are morally inconsistent don’t make use of their vmPFC to integrate information when making behavioral decisions, the researchers say. “Individuals exhibiting moral inconsistency are not necessarily blind to their own moral principles; they are just biologically failing to consider and apply them in their own moral behavior,” says the author. Source: ScienceMission science news highlights, https://sciencemission.com/Moral-inconsistency
In this really interesting essay, Michalon et al discuss defining Alzheimer’s disease in response to recent discussions on revising the definition and diagnostic criteria for the condition. The essay provides interesting historical context to the debate.
Recent revisions of Alzheimer’s Disease (AD) definitions by two leading research groups—the Alzheimer’s Association and the International Working Group—reflect divergent approaches: the former promotes a strictly biological definition, while the latter promotes a clinical-biological construct. We contend that this emerging controversy is not merely semantic, but scientifically, clinically, and politically significant. Drawing on philosophical tools and situating the current debate within a broader historical context from the reconceptualization of AD in the 1970s onwards, we explore how definitions can serve as transformative instruments, acting as strategic bets that reshape scientific fields and clinical practices. Ultimately, we draw from the AD case study to argue for a critical reflection on the risks and promises of such definitional acts. We also propose a renewed attention to the ‘ethics of stipulating’ in the field of contemporary biomedical sciences.
In response to advances in diagnostics and therapeutics, two major research groups specialising in Alzheimer’s disease (AD) have recently revised their definition and diagnostic criteria for the condition. While they concur on certain aspects—most notably, the centrality of amyloid and tau pathologies—the two groups have proposed different types of definition. The Alzheimer’s Association (AA) group asserts the following fundamental principle: “AD is defined by its unique neuropathologic findings; therefore, detection of AD neuropathologic change by biomarkers is equivalent to diagnosing the disease” (1, p. 5145). This definition regards specific biological changes as the sole defining feature of the disease, rather than one characteristic alongside specific symptoms. In this framework, asymptomatic individuals can be diagnosed with ‘preclinical AD’.
As part of Future Day 2026, we hosted a conversation between two of the most provocative minds in AGI – Ben Goertzel and Hugo de Garis (with Adam Ford as moderator/provocateur) – to tackle the ultimate existential question: Is an Artilect War inevitable, and should humanity accept becoming the “number two” species?
The discussion builds upon last year’s conversation between Ben and Hugo on AGI and the Singularity, and explores the idea of human transcendence: if we can’t beat them, do we join them?
Will humanity transcend into a Jupiter brain quectotech utility fog?
Is the Artilect War the inevitable conclusion of biological intelligence? Or can we find a path toward existing in a universe that still finds us aesthetically pleasing?