
Researchers at Aalto University were looking for better ways to instruct dance choreography in virtual reality. The new WAVE technique they developed will be presented in May at the CHI conference for human-computer interaction research.

Previous techniques have largely relied on pre-rehearsal and simplification.

“In dance, it is difficult to visualize and communicate how a dancer should move. The movement is so multi-dimensional, and it is difficult to take in rich data in real time,” says Professor Perttu Hämäläinen.

Astronomers have produced the largest 3D map of the universe, which can be explored in an interactive VR video. In the process, they’ve uncovered some tantalizing hints that our understanding of physics, including the ultimate fate of the cosmos, could be wrong.

The Dark Energy Spectroscopic Instrument (DESI) is a huge international project to map out the universe in three dimensions, which began collecting data in 2021. This early version of the map only includes data collected during the first year – 5.7 million galaxies and quasars out of the planned goal of 40 million. This data allows the scientists to peer as far as 11 billion light-years into deep space and time, providing a glimpse into the very early universe with an unprecedented precision of less than 1%.

With a view that zoomed-out, the cosmos resembles a colossal web, made up of bright strands of galaxies separated by unimaginably empty voids. If you feel up for an existential crisis, check out this VR fly-through video and remember that each of these blurry blobs of light is an entire galaxy, each containing millions of stars and billions of planets.

Scientists have created a method to produce 3D full-color holographic images using smartphone screens instead of lasers. With further development, the technique could be used in augmented or virtual reality displays.

Whether augmented or virtual reality displays are used for gaming, education, or other applications, incorporating 3D displays can create a more realistic and interactive user experience.

“Although holography techniques can create a very real-looking 3D representation of objects, traditional approaches aren’t practical because they rely on laser sources,” said research team leader Ryoichi Horisaki, from The University of Tokyo in Japan. “Lasers emit coherent light that is easy to control, but they make the system complex, expensive, and potentially harmful to the eyes.”
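The practical hurdle Horisaki describes comes from how holograms are computed and reconstructed: light is modeled as a complex field and numerically propagated between the object and the display plane. A minimal NumPy sketch of one standard propagation tool used in computational holography, the angular spectrum method, is below (the function name and parameter values are illustrative, not from the paper):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (meters) using the
    angular spectrum method, a standard step in computational holography.

    field      -- square 2D complex array sampled on a grid with pitch dx
    wavelength -- optical wavelength in meters
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies of the grid
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; clip evanescent components to zero phase arg
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0, np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative values: a point-like source (delta aperture) propagated 5 mm
# at 532 nm on a 2 µm grid; the point spreads into a diffraction pattern.
aperture = np.zeros((256, 256), dtype=complex)
aperture[128, 128] = 1.0
hologram_plane = angular_spectrum_propagate(aperture, 532e-9, 2e-6, 5e-3)
```

Because the transfer function has unit modulus for propagating frequencies, the total field energy is preserved while the pointlike source spreads out, which is exactly the behavior a hologram computation relies on.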

AR smart glasses: 2029. They'll look like just a normal pair of sunglasses. All the normal smartphone-type features. Built-in AI systems. Set up for some VR stuff. A built-in earbud/mic for calls, music, talking to AI, etc… May need a battery pack; we'll see in 2029.


The smart glasses will soon come with a built-in assistant.



The pursuit of artificial intelligence that can navigate and comprehend the intricacies of three-dimensional environments with the ease and adaptability of humans has long been a frontier in technology. At the heart of this exploration is the ambition to create AI agents that not only perceive their surroundings but also follow complex instructions articulated in the language of their human creators. Researchers are pushing the boundaries of what AI can achieve by bridging the gap between abstract verbal commands and concrete actions within digital worlds.

Researchers from Google DeepMind and the University of British Columbia have introduced a new AI framework, the Scalable, Instructable, Multiworld Agent (SIMA). The system is designed to train AI agents across diverse simulated 3D environments, from purpose-built research labs to commercial video games. What sets SIMA apart is its generality: it can understand and act on instructions in many different virtual settings, rather than being tied to a single one.

Creating a versatile AI that can interpret and act on natural-language instructions is no small feat. Earlier AI systems were trained in specific environments, which limited their usefulness in new situations. This is where SIMA's approach differs: training across many virtual settings allows it to execute a wide range of tasks, linking linguistic instructions to appropriate actions. This both improves its adaptability and deepens its grounding of language in different 3D spaces, a significant step forward in AI development.
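A key reason one agent can work across many games is the shared interface: SIMA observes screen pixels and a text instruction, and outputs generic keyboard-and-mouse actions rather than game-specific commands. The toy sketch below illustrates that interface shape only; all class names are hypothetical, and the keyword rules stand in for the learned vision-language policy SIMA actually uses.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    pixels: bytes       # raw screen frame, as the game renders it
    instruction: str    # natural-language command, e.g. "chop down the tree"

@dataclass
class Action:
    keys: List[str]     # keyboard keys to hold for this step
    mouse_dx: int       # relative mouse movement
    mouse_dy: int

class InstructableAgent:
    """Toy instruction-conditioned policy. Keyword matching here stands in
    for a trained model mapping (pixels, text) -> low-level actions."""

    def act(self, obs: Observation) -> Action:
        text = obs.instruction.lower()
        if "forward" in text or "go to" in text:
            return Action(keys=["w"], mouse_dx=0, mouse_dy=0)
        if "turn left" in text:
            return Action(keys=[], mouse_dx=-30, mouse_dy=0)
        return Action(keys=[], mouse_dx=0, mouse_dy=0)  # no-op fallback

agent = InstructableAgent()
step = agent.act(Observation(pixels=b"", instruction="Go to the blue house"))
```

Because every environment is driven through the same keyboard-and-mouse action space, nothing in the agent's interface needs to change when it moves from a research lab to a commercial game.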