
From virtual reality to rehabilitation and communication, haptic technology has revolutionized the way humans interact with the digital world. While early haptic devices focused on single-sensory cues like vibration-based notifications, modern advancements have paved the way for multisensory haptic devices that integrate various forms of touch-based feedback, including vibration, skin stretch, pressure, and temperature.
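As a rough illustration of what "multisensory" means in practice, a single fused cue can be represented as named channels for each of the modalities mentioned above. The field names, units, and example values below are illustrative assumptions, not a standard drawn from the research:

```python
# Hypothetical sketch: one multisensory haptic cue as a bundle of
# per-modality channels. Names and units are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HapticCue:
    vibration_hz: float = 0.0           # vibrotactile frequency
    vibration_amp: float = 0.0          # normalized drive level, 0..1
    skin_stretch_mm: float = 0.0        # lateral shear displacement
    pressure_kpa: float = 0.0           # normal indentation pressure
    temperature_c: Optional[float] = None  # thermal target, if any

# A "warm, firm grasp" rendered as one fused cue:
grasp = HapticCue(vibration_hz=80, vibration_amp=0.3,
                  pressure_kpa=12, temperature_c=34)
print(grasp)
```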

Recently, a team of experts, including Rice University’s Marcia O’Malley and Daniel Preston, graduate student Joshua Fleck, alumni Zane Zook ‘23 and Janelle Clark ‘22, and other collaborators, published an in-depth review in Nature Reviews Bioengineering analyzing the current state of wearable multisensory haptic technology, outlining its challenges, advancements, and real-world applications.

Haptic devices, which enable communication through touch, have evolved significantly since their introduction in the 1960s. Initially, they relied on rigid, grounded mechanisms acting as user interfaces, generating force-based feedback from virtual environments.

Accurate and robust 3D imaging of specular, or mirror-like, surfaces is crucial in fields such as industrial inspection, medical imaging, virtual reality, and cultural heritage preservation. Yet anyone who has visited a house of mirrors at an amusement park knows how difficult it is to judge the shape and distance of reflective objects.

This challenge also persists in science and engineering, where the accurate 3D imaging of specular surfaces has long been a focus in both optical metrology and computer vision research. While specialized techniques exist, their inherent limitations often confine them to narrow, domain-specific applications, preventing broader interdisciplinary use.
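To see why mirror-like surfaces are measurable at all, it helps to recall the textbook principle underlying deflectometry-style techniques in optical metrology (a general sketch, not the method of the study described below): by the law of reflection, a mirror’s surface normal bisects the incident and reflected rays, so recovering both ray directions at a point fixes the normal there.

```python
# General law-of-reflection sketch, not the Optica paper's algorithm.
import numpy as np

def surface_normal(d_in, d_out):
    """d_in: unit direction of light travelling toward the surface.
    d_out: unit direction of the reflected ray leaving the surface.
    Since d_out = d_in - 2*(d_in . n)*n, the normal is parallel
    to d_out - d_in."""
    d_in = np.asarray(d_in, dtype=float)
    d_out = np.asarray(d_out, dtype=float)
    n = d_out / np.linalg.norm(d_out) - d_in / np.linalg.norm(d_in)
    return n / np.linalg.norm(n)

# Light falling straight down on a horizontal mirror reflects straight up:
print(surface_normal([0, 0, -1], [0, 0, 1]))  # -> [0. 0. 1.]
```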

In a study published in the journal Optica, University of Arizona researchers from the Computational 3D Imaging and Measurement (3DIM) Lab at the Wyant College of Optical Sciences present a novel approach that significantly advances the 3D imaging of specular surfaces.

Imagine navigating virtual reality with contact lenses or operating your smartphone underwater: this and more could soon become reality thanks to innovative e-skins.

A research team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has developed an electronic skin that detects and precisely tracks magnetic fields with a single global sensor. This artificial skin is not only light, transparent and permeable, but also mimics the interactions of real skin and the brain, as the team reports in the journal Nature Communications.
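As a toy illustration of how a single field reading can carry spatial information (a simplified physics sketch, not HZDR’s tracking algorithm): on the axis of a small magnet, the field magnitude falls off with the cube of distance, so one measurement yields a distance estimate once the magnet’s dipole moment is known.

```python
# Illustrative on-axis dipole inversion; the moment value is an
# assumed calibration constant, not a figure from the paper.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def distance_from_field(b_tesla, moment_am2):
    """Invert B = MU0*m / (2*pi*r**3) for the source distance r (m)."""
    return (MU0 * moment_am2 / (2 * math.pi * b_tesla)) ** (1 / 3)

# A 0.1 A*m^2 magnet read at 2 microtesla sits about 0.215 m away:
print(round(distance_from_field(2e-6, 0.1), 3))
```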

Originally developed for robotics, e-skins imitate the properties of real skin. They can give robots a sense of touch or replace lost senses in humans. Some can even detect chemical substances or magnetic fields. But the technology also has its limits. Highly functional e-skins are often impractical because they rely on extensive electronics and large batteries.

A team of engineers led by Northwestern University has developed a new wearable device that stimulates the skin to deliver a range of complex sensations, such as vibrations, pressure, and twisting. This thin, flexible device gently adheres to the skin, offering more realistic and immersive sensory experiences. While it is well-suited for gaming and virtual reality (VR), the researchers also see potential applications in healthcare. For instance, the device could help individuals with visual impairments “feel” their surroundings or provide feedback to those with prosthetic limbs.
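A minimal sketch of that sensory-substitution idea, assuming a wearable rangefinder whose readings drive vibration intensity; this is a hypothetical mapping, not Northwestern’s firmware:

```python
# Hypothetical obstacle-to-vibration mapping for sensory substitution.
MAX_RANGE_M = 3.0  # assumed sensing range; beyond this, no feedback

def vibration_intensity(distance_m: float) -> float:
    """Return a 0..1 drive level: 1 at contact, fading to 0 at range."""
    d = max(0.0, min(distance_m, MAX_RANGE_M))
    return 1.0 - d / MAX_RANGE_M

print(vibration_intensity(0.5))  # obstacle at 0.5 m -> ~0.83
```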

For over a century, galvanic vestibular stimulation (GVS) has been used as a way to stimulate the inner ear nerves by passing a small amount of current.

We use GVS in a two-player, escape-the-room-style VR game set in a dark virtual world. The VR player is remote-controlled like a robot by a non-VR player, who uses GVS to alter the VR player’s walking trajectory, as sketched below. We also use GVS to induce the physical sensations of virtual motion and to mitigate motion sickness in VR.
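A minimal sketch of what such remote steering could look like in software, assuming bilateral-bipolar electrodes on the mastoids and the common convention that sway is pulled toward the anode; the current ceiling and linear mapping are illustrative assumptions, not the study’s protocol:

```python
# Hypothetical steering-to-current mapping for bilateral GVS.
MAX_CURRENT_MA = 1.5  # assumed device-dependent safety ceiling

def gvs_current(steer: float) -> float:
    """Map steer in [-1, 1] (negative = veer left) to a signed current.
    Positive current is taken to mean anode over the right mastoid,
    which pulls the walking user's trajectory to the right."""
    steer = max(-1.0, min(1.0, steer))
    return steer * MAX_CURRENT_MA

# The non-VR player nudges the VR player gently rightward:
print(gvs_current(0.4))  # -> 0.6 (mA, anode right)
```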

Brain hacking has been a futurist fascination for decades. Turns out, we may be able to make it a reality as research explores the impact of GVS on everything from tactile sensation to memory.

Misha graduated in June 2018 from the MIT Media Lab, where she worked in the Fluid Interfaces group with Prof. Pattie Maes. Misha works in the area of human-computer interaction (HCI), specifically related to virtual, augmented and mixed reality. The goal of her work is to create systems that use the entire body for input and output and automatically adapt to each user’s unique state and context. Misha calls her concept perceptual engineering, i.e., immersive systems that alter the user’s perception (or more specifically the input signals to their perception) and influence or manipulate it in subtle ways. For example, they modify a user’s sense of balance or orientation, manipulate their visual attention and more, all without the user’s explicit awareness, in order to assist or guide their interactive experience in an effortless way.

The systems Misha builds use the entire body for input and output; they can take movement, like walking, or a physiological signal, like breathing, as input, and can output signals that actuate the user’s vestibular system with electrical pulses, causing the individual to move or turn involuntarily. HCI up to now has relied upon deliberate, intentional usage, both for input (e.g., touch, voice, typing) and for output (interpreting what the system tells you, shows you, etc.). In contrast, Misha develops techniques and builds systems that do not require this deliberate, intentional user interface but instead use the body itself as the interface for more implicit and natural interactions, as in the sketch below.
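As one hypothetical example of such implicit input (a sketch of the general idea, not a system Misha is documented to have built exactly this way), a respiration trace from a chest band could be reduced to a breathing rate that the system adapts to, without the user doing anything deliberate:

```python
# Assumed setup: a sampled respiration signal from a chest band.
import numpy as np

def breaths_per_minute(signal, fs):
    """Estimate breathing rate by counting rising zero crossings
    of the mean-removed respiration trace."""
    x = np.asarray(signal, dtype=float)
    x -= x.mean()
    rising = np.sum((x[:-1] < 0) & (x[1:] >= 0))
    return rising * 60.0 * fs / len(x)

# 30 s synthetic trace at 10 Hz with a 12 breaths/min rhythm:
fs = 10.0
t = np.arange(0, 30, 1 / fs)
trace = np.sin(2 * np.pi * (12 / 60) * t - np.pi / 2)
print(round(breaths_per_minute(trace, fs)))  # -> 12
```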

Misha’s perceptual engineering approach has been shown to increase the user’s sense of presence in VR/MR, provide novel ways to communicate between the user and the digital system using proprioception and other sensory modalities, and serve as a platform to question the boundaries of our sense of agency and trust.

Could this VR experience change how you see the planet?


For many, constant bad news numbs our reaction to climate disasters. But research suggests that a new type of immersive storytelling about nature told through virtual reality (VR) can both build empathy and inspire us to act.

I’m crying into a VR headset. I’ve just watched a VR experience that tells the story of a young pangolin called Chestnut, as she struggles to survive in the Kalahari Desert. A vast, dusty landscape extends around me in all directions, and her armoured body seems vulnerable as she curls up, alone, to sleep. Her story is based on the life of a real pangolin that was tracked by scientists.

Chestnut hasn’t found enough ants to eat, since insect numbers have dwindled due to climate change. Her sunny voice remains optimistic even as exhaustion takes over. In the final scenes, she dies, and I must clumsily lift my headset to dab my eyes.

Summary: New research indicates a strong link between high social media use and psychiatric disorders involving delusions, such as narcissism and body dysmorphia. Conditions like narcissistic personality disorder, anorexia, and body dysmorphic disorder thrive on social platforms, allowing users to build and maintain distorted self-perceptions without real-world checks.

The study highlights how virtual environments enable users to escape social scrutiny, intensifying delusional self-images and potentially exacerbating existing mental health issues. Researchers emphasize that social media isn’t inherently harmful, but immersive virtual environments coupled with real-life isolation can significantly amplify unhealthy mental states.

An international team of scientists has developed augmented reality glasses that receive images beamed from a projector, resolving some of the existing limitations of such glasses, such as their weight and bulk. The team’s research is being presented at the IEEE VR conference in Saint-Malo, France, in March 2025.

Augmented reality (AR) technology, which overlays digital information and virtual objects on an image of the real world viewed through a device’s viewfinder or camera screen, has gained traction in recent years with popular gaming apps like Pokémon Go, and real-world applications in areas including education, manufacturing, retail and health care. But the adoption of wearable AR devices has lagged because of the heft of their batteries and electronic components.
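The overlay step itself is standard computer vision: a virtual 3D point expressed in the camera’s frame is projected to pixel coordinates and drawn over the live image. A minimal pinhole-camera sketch (with made-up intrinsics, unrelated to this team’s projector-based system):

```python
# Generic pinhole projection used for AR overlays; K is an example
# intrinsics matrix, not values from the research described above.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

def project(point_cam):
    """Project a 3D point (camera frame, z > 0) to (u, v) pixels."""
    p = K @ np.asarray(point_cam, dtype=float)
    return p[0] / p[2], p[1] / p[2]

# A virtual object 2 m ahead and 0.25 m to the right of the camera:
print(project([0.25, 0.0, 2.0]))  # -> (420.0, 240.0)
```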

AR glasses, in particular, have the potential to transform a user’s physical environment by integrating virtual elements. Despite many advances in hardware technology over the years, AR glasses remain heavy and awkward and still lack adequate computational power, battery life and brightness for optimal user experience.

Rodolfo Llinas tells the story of how he has developed bundles of nanowires thinner than spider webs that can be inserted into the blood vessels of human brains.

While these wires have so far only been tested in animals, they prove that direct communication with the deep recesses of the brain may not be so far off. To appreciate just how big a breakthrough this is, consider that agents from the US National Security Agency quickly showed up at the MIT laboratory when the wires were being developed.

What does this mean for the future? It might become possible to stimulate the senses directly, creating visual perceptions, auditory perceptions, movements, and feelings. Deep brain stimulation could create the ultimate virtual reality. Direct communication between human and machine, or even from brain to brain, could also become a real possibility.

Llinas poses compelling questions about the potential and ethics of his technology.