
Is Consciousness a Field? | Robert Lawrence Kuhn


Why is there something rather than nothing? Robert Lawrence Kuhn, creator of Closer To Truth, joins John Michael Godier to explore one of the most profound questions in science and philosophy. The discussion moves through materialism, idealism, panpsychism, and quantum perspectives, asking whether consciousness is merely a byproduct of evolution or a fundamental aspect of reality, and what that could mean for the universe, artificial intelligence, and the nature of mind. Kuhn discusses his recent paper, A Landscape of Consciousness: Toward a Taxonomy of Explanations and Implications, which maps the full range of consciousness theories and explores their broader significance.

Links:
Closer to Truth:
https://www.youtube.com/c/CloserToTruthTV

A Landscape of Consciousness: Toward a Taxonomy of Explanations and Implications by Robert Lawrence Kuhn:
https://www.sciencedirect.com/science/article/pii/S0079610723001128?via%3Dihub

Seeing the consciousness forest for the trees by Àlex Gómez-Marín.
https://iai.tv/articles/seeing-the-consciousness-forest-for-the-trees-auid-2901

00:00:00 Introduction to Robert Lawrence Kuhn and consciousness.

AI & Cancer: What Worked, What Failed, and Why It Matters

In this episode of The Moss Report, Ben Moss sits down with Dr. Ralph Moss to explore the real-world pros and cons of using artificial intelligence in cancer research and care.

From AI-generated health advice to PubMed citations that don’t exist, this honest conversation covers what AI tools are getting right—and where they can dangerously mislead.

Dr. Moss shares the results of his own AI test across five major platforms, exposing their strengths and surprising failures.

Whether you’re a cancer patient, caregiver, or simply curious about how AI is shaping the future of medicine, this episode is essential listening.

Links and Resources:

🌿 The Moss Method – Fight Cancer Naturally – (Paperback, Hardcover, Kindle) https://amzn.to/4dGvVjp

A new transformer architecture emulates imagination and higher-level human mental states

The advancement of artificial intelligence (AI) and the study of neurobiological processes are deeply interlinked, as a deeper understanding of one can yield valuable insights about the other, and vice versa. Recent neuroscience studies have found that mental state transitions, such as the transition from wakefulness to slow-wave sleep and then to rapid eye movement (REM) sleep, modulate temporary interactions in a class of neurons known as layer 5 pyramidal two-point neurons (TPNs), aligning them with a person’s mental states.

These are interactions between information originating from the external world, broadly referred to as the receptive field (RF1), and inputs emerging from internal states, referred to as the contextual field (CF2). Past findings suggest that RF1 and CF2 inputs are processed at two distinct sites within the neurons, known as the basal site and apical site, respectively.

Current AI algorithms employing attention mechanisms, such as transformers and the Perceiver and Flamingo models, are inspired by the capabilities of the human brain. In their current form, however, they do not reliably emulate high-level perceptual processing or the imaginative states experienced by humans.
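The two-site picture above can be sketched numerically. The snippet below is a minimal illustration, not the paper's model: it assumes a common simplification from the two-point-neuron literature in which the apical (contextual-field) input multiplicatively gates the basal (receptive-field) drive, so an internal-state change alters the response to an identical external input. The function names and the exact transfer functions are illustrative choices.

```python
import math

def two_point_unit(rf_drive, cf_context, k=2.0):
    """Toy two-point neuron: basal (RF) drive gated by apical (CF) input.

    rf_drive   -- feedforward input from the external world (RF)
    cf_context -- input reflecting internal state (CF)
    The multiplicative gate is an illustrative simplification.
    """
    basal = math.tanh(rf_drive)                            # basal-site response
    apical_gate = 1.0 / (1.0 + math.exp(-k * cf_context))  # apical-site gate in (0, 1)
    return basal * apical_gate                             # context modulates, not drives

# Identical sensory input, different internal states:
strong_context = two_point_unit(rf_drive=1.0, cf_context=2.0)
weak_context = two_point_unit(rf_drive=1.0, cf_context=-2.0)
```

With the same `rf_drive`, the unit's output is amplified under a strong contextual field and nearly silenced under a suppressed one, which is the qualitative behavior the mental-state findings describe.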

3DLLM-Mem: Long-Term Spatial-Temporal Memory for Embodied 3D Large Language Model

Humans excel at performing complex tasks by leveraging long-term memory across temporal and spatial experiences. In contrast, current Large Language Models (LLMs) struggle to effectively plan and act in dynamic, multi-room 3D environments. We posit that part of this limitation is due to the lack of proper 3D spatial-temporal memory modeling in LLMs. To address this, we first introduce 3DMem-Bench, a comprehensive benchmark comprising over 26,000 trajectories and 2,892 embodied tasks spanning question answering and captioning, designed to evaluate an agent’s ability to reason over long-term memory in 3D environments. Second, we propose 3DLLM-Mem, a novel dynamic memory management and fusion model for embodied spatial-temporal reasoning and actions in LLMs.

Quantum computers may crack RSA encryption with fewer qubits than expected

A team of researchers at Google Quantum AI, led by Craig Gidney, has outlined advances in quantum computer algorithms and error correction methods that could allow such computers to crack Rivest–Shamir–Adleman (RSA) encryption keys with far fewer resources than previously thought. The development, the team notes, suggests encryption experts need to begin work toward developing next-generation encryption techniques. The paper is published on the arXiv preprint server.

RSA is an encryption technique developed in the late 1970s that involves generating public and private keys; the former is used for encryption and the latter for decryption. Current standards call for using a 2,048-bit encryption key. Over the past several years, research has suggested that quantum computers would one day be able to crack RSA encryption, but because quantum hardware development has been slow, researchers believed it would be many years before that came to pass.
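The public/private key split described above can be shown with textbook RSA on tiny primes. This is purely illustrative: real RSA uses 2,048-bit moduli and padding schemes, and the numbers below are trivially breakable by a classical computer, never mind a quantum one.

```python
# Textbook RSA with tiny primes (insecure, for illustration only).
p, q = 61, 53                 # two small primes (real RSA: ~1,024 bits each)
n = p * q                     # public modulus, shared by both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
```

Breaking the scheme amounts to factoring `n` back into `p` and `q`, which is exactly the problem Shor's algorithm makes tractable on a sufficiently large quantum computer; the Google result lowers the estimated hardware needed to do so for 2,048-bit moduli.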

Some in the field have accepted a theory that a quantum computer capable of cracking such codes in a reasonable amount of time would have to have at least 20 million qubits. In this new work, the team at Google suggests it could theoretically be done with as few as a million qubits—and it could be done in a week.

How coffee affects a sleeping brain

Caffeine is not only found in coffee, but also in tea, chocolate, energy drinks and many soft drinks, making it one of the most widely consumed psychoactive substances in the world.

In a study published in Communications Biology, a team of researchers from Université de Montréal shed new light on how caffeine can modify sleep and influence the brain’s recovery—both physical and cognitive—overnight.

The research was led by Philipp Thölke, a research trainee at UdeM’s Cognitive and Computational Neuroscience Laboratory (CoCo Lab), and co-led by the lab’s director, Karim Jerbi, a researcher at Mila–Quebec AI Institute.