
How likely is it that we live in a simulation? Are virtual worlds real?

In this first episode of the second series we delve into the fascinating topic of virtual reality simulations and the extraordinary possibility that our universe is itself a simulation. For thousands of years, some mystical traditions have maintained that the physical world and our separate ‘selves’ are an illusion. Only now, with the development of our own computer simulations and virtual worlds, have scientists and philosophers begun to assess the statistical probability that our shared reality could in fact be some kind of representation rather than a physical place.
As we become more open to these possibilities, other difficult questions come into focus. How can we create a common language for talking about matter and energy that bridges the simulated and simulating worlds? Who could have created such a simulation? Could it be an artificial intelligence rather than a biological or conscious being? Do we have ethical obligations to the virtual beings we interact with in our virtual worlds, and to what extent are those beings and worlds ‘real’? The list is long and mind-bending.

Fortunately, to untangle our thoughts on this, we have one of the world’s best-known philosophers of all things mind-bending, Dr. David Chalmers, who has just released a book on this very topic, ‘Reality+: Virtual Worlds and the Problems of Philosophy’. Dr. Chalmers is an Australian philosopher and cognitive scientist specialising in the philosophy of mind and the philosophy of language. He is a Professor of Philosophy and Neuroscience at New York University, as well as co-director of NYU’s Center for Mind, Brain and Consciousness. He is the founder of the ‘Towards a Science of Consciousness’ conference, at which, in 1994, he coined the term ‘The Hard Problem of Consciousness’, kicking off a renaissance in consciousness studies that has been growing in popularity and research output ever since.

Donate here: https://www.chasingconsciousness.net/episodes.

What we discuss in this episode:
00:00 Short Intro.
06:00 Synesthesia.
08:27 The science of knowing the nature of reality.
11:02 The Simulation Hypothesis explained.
15:25 The statistical probability evaluation.
18:00 Knowing for sure is beyond the reach of science.
19:00 You’d only have to render the part you’re interacting with.
20:00 Clues from physics.
22:00 John Wheeler: ‘It from bit’.
23:32 Eugene Wigner: measurement as a conscious observation.
27:00 Information theory as a useful but risky hold-all language tool.
34:30 Virtual realities are real and virtual interactions are meaningful.
37:00 Ethical approaches to non-player characters (NPCs) and their rights.
38:45 Will advanced AI be conscious?
42:45 Is God a hacker in the universe one level up? Simulation theology.
44:30 Simulation theory meets the argument for the existence of God from design.
51:00 The Hard Problem of Consciousness applies to AI too.
55:00 Testing AI’s consciousness with the Turing test.
59:30 Ethical value applied to immoral actions in virtual worlds.

References:

The development of increasingly sophisticated sensors can facilitate the advancement of various technologies, including robots, security systems, virtual reality (VR) equipment and advanced prosthetics. Multimodal tactile sensors, which can pick up different types of touch-related information (e.g., pressure, texture and type of material), are among the most promising for applications that can benefit from the artificial replication of the human sense of touch.

When exploring their surroundings, communicating with others and expressing themselves, humans perform a wide range of body motions. The ability to realistically replicate these motions, applying them to human and humanoid characters, could be highly valuable for the development of video games, animations, content that can be viewed using virtual reality (VR) headsets, and training videos for professionals.

Researchers at Peking University’s Institute for Artificial Intelligence (AI) and the State Key Laboratory of General AI recently introduced new models that could simplify the generation of realistic motions for human characters or avatars. The work is published on the arXiv preprint server.

Their proposed approach for the generation of human motions, outlined in a paper presented at CVPR 2025, relies on a data augmentation technique called MotionCutMix and a diffusion model called MotionReFit.
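The paper itself should be consulted for the actual method, but the general idea of a CutMix-style augmentation for motion data can be sketched in a few lines of Python. This is an illustrative guess at the concept only; the function name, joint indices and array shapes below are invented and do not reproduce the authors’ MotionCutMix implementation.

# Hypothetical sketch of CutMix-style motion augmentation: build a new training
# clip by grafting one body part's trajectory from clip B onto clip A.
import numpy as np

def motion_cutmix(motion_a, motion_b, joint_groups, rng):
    """motion_a, motion_b: (frames, joints, channels) arrays, temporally aligned.
    joint_groups: lists of joint indices forming body parts (arm, leg, torso...)."""
    assert motion_a.shape == motion_b.shape
    mixed = motion_a.copy()
    group = joint_groups[rng.integers(len(joint_groups))]   # pick a random body part
    mixed[:, group, :] = motion_b[:, group, :]               # overwrite its trajectory
    return mixed

# Example with two 120-frame clips, 24 joints, 6 rotation channels per joint.
rng = np.random.default_rng(0)
clip_a = rng.standard_normal((120, 24, 6))
clip_b = rng.standard_normal((120, 24, 6))
augmented = motion_cutmix(clip_a, clip_b, [[13, 16, 18], [2, 5, 8]], rng)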

The use of virtual reality haptic simulators can enhance skill acquisition and reduce stress among dental students during preclinical endodontic training, according to a new study published in the International Endodontic Journal. The study was a collaboration between the University of Eastern Finland, the University of Health Sciences and Ondokuz Mayıs University in Turkey, and Grande Rio University in Brazil.

The study aimed to evaluate the influence of virtual reality (VR) haptic simulators on skill acquisition and stress reduction in endodontic preclinical education of dental students.

During preclinical training, dental students develop manual dexterity, psychomotor skills and confidence essential in clinical practice. VR and haptic technology are increasingly used alongside conventional methods, enabling more repetition and standardised feedback, among other things.

A new approach to streaming technology may significantly improve how users experience virtual reality and augmented reality environments, according to a study from NYU Tandon School of Engineering.

The research—presented in a paper at the 16th ACM Multimedia Systems Conference (ACM MMSys 2025) on April 1, 2025—describes a method for directly predicting visible content in immersive 3D environments, potentially reducing bandwidth requirements by up to 7-fold while maintaining visual quality.
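The specifics of the prediction model are in the MMSys paper, but the underlying intuition of streaming only what a user is about to see can be illustrated with a toy viewport-culling routine. Everything below, from the function name to the field-of-view check, is an invented simplification rather than the NYU Tandon method.

# Toy sketch: stream only the point-cloud cells whose centers fall inside a
# predicted view cone (a stand-in for a learned visibility predictor).
import numpy as np

def visible_cells(cell_centers, camera_pos, view_dir, fov_deg=90.0):
    to_cells = cell_centers - camera_pos                      # camera -> cell vectors
    to_cells /= np.linalg.norm(to_cells, axis=1, keepdims=True)
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos_angle = to_cells @ view_dir                           # cosine of angle to view axis
    return np.where(cos_angle > np.cos(np.radians(fov_deg / 2)))[0]

centers = np.random.default_rng(1).uniform(-5, 5, size=(1000, 3))
keep = visible_cells(centers, camera_pos=np.zeros(3), view_dir=np.array([0.0, 0.0, 1.0]))
print(f"streaming {len(keep)} of {len(centers)} cells")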

The technology is being applied in an ongoing NYU Tandon project to bring point cloud video to dance education, making 3D dance instruction streamable on standard devices with lower bandwidth requirements.

Researchers have succeeded, for the first time, in displaying three-dimensional graphics in mid-air that can be manipulated with the hands. The team includes Doctor Elodie Bouzbib, from Public University of Navarra (UPNA), together with Iosune Sarasate, Unai Fernández, Manuel López-Amo, Iván Fernández, Iñigo Ezcurdia and Asier Marzo (the latter two, members of the Institute of Smart Cities).

“What we see in films and call holograms are typically volumetric displays,” says Bouzbib, the first author of the work. “These are graphics that appear in mid-air and can be viewed from various angles without the need for wearing virtual reality glasses. They are called true-3D graphics.

“They are particularly interesting as they allow for the ‘come-and-interact’ paradigm, meaning that the users simply approach a device and start using it.”

In the paper accompanying the launch of R1, DeepSeek explained how it took advantage of techniques such as synthetic data generation, distillation, and machine-driven reinforcement learning to produce a model that exceeded the current state of the art. Each of these approaches amounts to harnessing the capabilities of an existing AI model to assist in the training of a more advanced version.
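DeepSeek’s actual training recipe is not reproduced here, but knowledge distillation, one of the techniques named above, has a standard textbook form: a student model is trained to match both the ground-truth labels and the softened output distribution of a teacher model. The PyTorch sketch below shows that generic loss; the temperature and mixing weight are arbitrary illustrative values.

# Generic knowledge-distillation loss: cross-entropy on hard labels blended
# with KL divergence to the teacher's softened predictions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1 - alpha) * soft

# Example: a batch of 4 samples over a 10-token vocabulary.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
distillation_loss(student, teacher, labels).backward()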

DeepSeek is far from alone in using these AI techniques to advance AI. Mark Zuckerberg predicts that the mid-level engineers at Meta may soon be replaced by AI counterparts, and that Llama 3 (his company’s LLM) “helps us experiment and iterate faster, building capabilities we want to refine and expand in Llama 4.” Nvidia CEO Jensen Huang has spoken at length about creating virtual environments in which AI systems supervise the training of robotic systems: “We can create multiple different multiverses, allowing robots to learn in parallel, possibly learning in 100,000 different ways at the same time.”

This isn’t quite yet the singularity, when intelligent machines autonomously self-replicate, but it is something new and potentially profound. Even amidst such dizzying progress in AI models, though, it’s not uncommon to hear some observers talk about the potential slowing of what’s called the “scaling laws”—the observed principles that AI models increase in performance in direct relationship to the quantity of data, power, and compute applied to them. The release from DeepSeek, and several subsequent announcements from other companies, suggests that arguments of the scaling laws’ demise may be greatly exaggerated. In fact, innovations in AI development are leading to entirely new vectors for scaling—all enabled by AI itself. Progress isn’t slowing down, it’s speeding up—thanks to AI.
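The “scaling laws” mentioned above are usually stated as empirical power laws: predicted loss falls smoothly as a power of the data or compute spent on training. The snippet below shows the shape of such a curve; the constants in it are invented for illustration and are not fitted values from any published study.

# Illustrative power-law scaling curve: loss = irreducible + scale * compute^(-exponent).
# All constants are made up for illustration, not fitted to real training runs.
def predicted_loss(compute_flops, irreducible=1.7, scale=30.0, exponent=0.05):
    return irreducible + scale * compute_flops ** -exponent

for c in (1e21, 1e22, 1e23):   # each step is 10x more training compute
    print(f"{c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")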

From virtual reality to rehabilitation and communication, haptic technology has revolutionized the way humans interact with the digital world. While early haptic devices focused on single-sensory cues like vibration-based notifications, modern advancements have paved the way for multisensory haptic devices that integrate various forms of touch-based feedback, including vibration, skin stretch, pressure, and temperature.

Recently, a team of experts, including Rice University’s Marcia O’Malley and Daniel Preston, graduate student Joshua Fleck, alumni Zane Zook ‘23 and Janelle Clark ‘22, and other collaborators, published an in-depth review in Nature Reviews Bioengineering analyzing the current state of wearable multisensory haptic technology, outlining its challenges, advancements, and real-world applications.

Haptic devices, which enable communication through touch, have evolved significantly since their introduction in the 1960s. Initially, they relied on rigid, grounded mechanisms acting as user interfaces, generating force-based feedback from virtual environments.

Accurate and robust 3D imaging of specular, or mirror-like, surfaces is crucial in fields such as industrial inspection, medical imaging, virtual reality, and cultural heritage preservation. Yet anyone who has visited a house of mirrors at an amusement park knows how difficult it is to judge the shape and distance of reflective objects.

This challenge also persists in science and engineering, where the accurate 3D imaging of specular surfaces has long been a focus in both optical metrology and computer vision research. While specialized techniques exist, their inherent limitations often confine them to narrow, domain-specific applications, preventing broader interdisciplinary use.

In a study published in the journal Optica, University of Arizona researchers from the Computational 3D Imaging and Measurement (3DIM) Lab at the Wyant College of Optical Sciences present a novel approach that significantly advances the 3D imaging of specular surfaces.

Imagine navigating virtual reality with contact lenses or operating your smartphone underwater: this and more could soon be possible thanks to innovative e-skins.

A research team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has developed an e-skin that detects and precisely tracks magnetic fields with a single global sensor. This artificial skin is not only light, transparent and permeable, but also mimics the interactions of real skin and the brain, as the team reports in the journal Nature Communications.

Originally developed for robotics, e-skins imitate the properties of real skin. They can give robots a sense of touch or replace lost senses in humans. Some can even detect chemical substances or magnetic fields. But the technology also has its limits. Highly functional e-skins are often impractical because they rely on extensive electronics and large batteries.