‘Embodied’ AI in virtual reality improves programming student confidence

Researchers have found that giving AI “peers” in virtual reality (VR) a body that can interact with the virtual environment can help students learn programming. Specifically, the researchers found students were more willing to accept these “embodied” AI peers as partners, compared to voice-only AI, helping the students better engage with the learning experience.

“Using AI agents in a VR setting for teaching students programming is a relatively recent development, and this proof-of-concept study was meant to see what kinds of AI agents can help students learn better and work more effectively,” says Qiao Jin, corresponding author of a paper on the work and an assistant professor of computer science at North Carolina State University.

“Peer learning is widespread in the programming field, as it helps students engage with the learning process. For this work, we focused on ‘pAIr’ learning, where the programming peer is actually an AI agent. And the results suggest that embodying AI in the VR environment makes a real difference for pAIr learning.”

Scientists unveil breakthrough pixel that could put holograms on your smartphone

A team at the University of St Andrews has unlocked a major step toward true holographic displays by combining OLEDs with holographic metasurfaces. Unlike traditional laser-based holograms, this compact and affordable method could transform smart devices, entertainment, and even virtual reality. The breakthrough allows entire images to be generated from a single OLED pixel, removing long-standing barriers and pointing to a future of lightweight, miniaturized holographic technology.

CEA-Leti to Present Breakthrough Toward Ultra-Compact, High-Resolution AR/VR Displays at MicroLED Connect Conference

GRENOBLE, France – Sept. 16, 2025 – CEA-Leti and the Centre for Research on Heteroepitaxy and its Applications (CRHEA) today announced R&D results that have cleared a path toward full-color microdisplays based on a single material system, a long-standing goal for augmented and virtual reality (AR/VR) technologies.

The project, presented in a paper published in Nature Communications Materials, developed a technique for growing high-quality InGaN-based quantum wells on sub-micron nanopyramids, enabling native emission of red, green, and blue (RGB) light from a single material system. Titled “Regular Red-Green-Blue InGaN Quantum Wells With In Content Up To 40% Grown on InGaN Nanopyramids”, the paper will be presented at the MicroLED Connect Conference on Sept. 24, in Eindhoven, the Netherlands.

Microdisplays for immersive devices require bright RGB sub-pixels smaller than 10 × 10 microns. According to the paper, “the use of III-nitride materials promises high efficiency micro-light emitting diodes (micro-LEDs) compared to their organic counterparts. However, for such a pixel size, the pick and place process is no longer suitable for combining blue and green micro-LEDs from III-nitrides and red micro-LEDs from phosphide materials on the same platform.” Red-emitting phosphide micro-LEDs also suffer from efficiency losses at small sizes, while color conversion methods face challenges in deposition precision and stability.

Engineers send a wireless curveball to deliver massive amounts of data

High-frequency radio waves can wirelessly carry the vast amounts of data demanded by emerging technologies like virtual reality, but as engineers push into the upper reaches of the radio spectrum, they are hitting walls. Literally.

Ultrahigh frequency bandwidths are easily blocked by objects, so users can lose transmissions walking between rooms or even passing a bookcase.

Now, researchers at Princeton Engineering have developed a machine-learning system that could allow ultrahigh frequency transmissions to dodge those obstacles. In an article in Nature Communications, the researchers unveiled a system that shapes transmissions to avoid obstacles, coupled with a neural network that can rapidly adjust to a complex and dynamic environment.

Nissan confirms design studio data breach claimed by Qilin ransomware

Nissan Japan has confirmed to BleepingComputer that it suffered a data breach following unauthorized access to a server of one of its subsidiaries, Creative Box Inc. (CBI).

This came in response to the Qilin ransomware group’s claims that they had stolen four terabytes of data from CBI, including 3D vehicle design models, internal reports, financial documents, VR design workflows, and photos.

“On August 16, 2025, suspicious access was detected on the data server of Creative Box Inc. (CBI), a company contracted by Nissan for design work,” stated a Nissan spokesperson to BleepingComputer.

AI tech breathes life into virtual companion animals

Researchers at UNIST have developed an innovative AI technology capable of reconstructing highly detailed three-dimensional (3D) models of companion animals from a single photograph, enabling realistic animations. This breakthrough allows users to experience lifelike digital avatars of their companion animals in virtual reality (VR), augmented reality (AR), and metaverse environments.

Identifying a compass in the human brain

Zhengang Lu and Russell Epstein, from the University of Pennsylvania, led a study to explore how people maintain their sense of direction while navigating naturalistic, virtual reality cities.

As reported in their JNeurosci paper, the researchers collected neuroimaging data while 15 participants performed a taxi-driving task in a virtual reality city. Two brain regions represented a forward-facing direction as people moved around. This representation was consistent across variations of the city with different visual features.

The signal was also consistent across different phases of the task (i.e., picking up a passenger versus driving a passenger to their drop-off location) and various locations in the city. Additional analyses suggest that these brain regions represent a broad range of facing directions by keeping track of direction relative to the north–south axis of the environment.
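To make the axis-relative idea concrete: coding direction "relative to the north–south axis" means a 0–360° compass heading is reduced to its angular distance from that axis, so that, for example, headings of 45° and 135° become equivalent. This is a toy sketch of that arithmetic only, not the authors' analysis; the function name is illustrative.

```python
def axis_relative_direction(heading_deg: float) -> float:
    """Fold a 0-360 degree compass heading onto the north-south axis,
    returning the angle (0-90 degrees) between heading and axis."""
    h = heading_deg % 180.0      # north (0) and south (180) collapse together
    return min(h, 180.0 - h)    # distance to the nearest axis endpoint

# Headings mirrored about the axis map to the same value:
# axis_relative_direction(45.0) == axis_relative_direction(135.0) == 45.0
```

Under such a code, due-north and due-south headings both map to 0°, while due-east and due-west both map to 90°, which is one way a broad range of facing directions can be represented by a single environmental axis.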
