Archive for the ‘augmented reality’ category: Page 7

Jan 9, 2024

Space Force taps Microsoft to build cloud-based, simulated space environment

Posted in categories: augmented reality, space

The Space Force announced Friday that it has given Microsoft a contract to continue work on a simulated environment where guardians can train, test new capabilities and interact with digital copies of objects in orbit.

Under the $19.8 million, one-year contract from Space Systems Command (SSC), Microsoft will develop the Integrated, Immersive, Intelligent Environment (I3E) — an augmented reality space simulation powered by the company’s HoloLens headsets. The training tool is a successor to the service’s Immersive Digital Facility (IDF) prototype developed in 2023, according to a press release.

The contract period began Dec. 1, and the deal includes a reserved scope for an additional three years of work, per the release.

Jan 5, 2024

Square Enix plans ‘aggressive’ use of AI to create new forms of content

Posted in categories: augmented reality, business, robotics/AI, virtual reality

Generative AI provoked a lot of discussion last year around images, text and video, but it may soon affect the gaming industry as well. Square Enix said it plans to be “aggressively applying” AI and other cutting-edge tech in 2024 to “create new forms of content,” according to president Takashi Kiryu’s New Year’s letter.

“Artificial intelligence (AI) and its potential implications had for some time largely been subjects of academic debate,” he said. “However, the introduction of ChatGPT, which allows anyone to easily produce writing or translations or to engage in text-based dialogue, sparked the rapid spread of generative AIs. I believe that generative AI has the potential not only to reshape what we create, but also to fundamentally change the processes by which we create, including programming.”

The company will start by using it to improve productivity in development and assist in marketing. “In the longer term, we hope to leverage those technologies to create new forms of content for consumers, as we believe that technological innovation represents business opportunities,” Kiryu added. Square Enix also plans to build more immersive AR and VR experiences, including “new forms of content that fuse the real world and virtual worlds.”

Dec 20, 2023

These minuscule pixels are poised to take augmented reality by storm

Posted in categories: augmented reality, virtual reality

LEDs and their organic counterparts are getting truly tiny. This could be the moment AR and VR companies have been waiting for.

Dec 19, 2023

This AI Paper Introduces a Groundbreaking Method for Modeling 3D Scene Dynamics Using Multi-View Videos

Posted in categories: augmented reality, physics, robotics/AI

NVFi tackles the challenge of understanding and predicting the dynamics of 3D scenes as they evolve over time, a task critical for applications in augmented reality, gaming, and cinematography. While humans effortlessly grasp the physics and geometry of such scenes, existing computational models struggle to learn these properties explicitly from multi-view videos. The core issue is that prevailing methods, including neural radiance fields and their derivatives, cannot extract learned physical rules and use them to predict future motion. NVFi aims to bridge this gap by learning disentangled velocity fields purely from multi-view video frames, something prior frameworks have not attempted.

The dynamic nature of 3D scenes poses a profound computational challenge. Recent neural radiance fields interpolate views well within the observed time window, but they do not learn explicit physical quantities such as object velocities, which limits their ability to predict future motion accurately. Studies that integrate physics into neural representations show promise in reconstructing scene geometry, appearance, velocity, and viscosity fields, but the learned physical properties are typically entangled with specific scene elements or require supplementary foreground segmentation masks, limiting their transferability across scenes. NVFi instead aims to disentangle and model the velocity fields of entire 3D scenes, enabling predictions that extend beyond the training observations.

Researchers from The Hong Kong Polytechnic University introduce NVFi, a framework with three components. First, a keyframe dynamic radiance field learns time-dependent volume density and appearance for every point in 3D space. Second, an interframe velocity field captures time-dependent 3D velocities for each point. Third, a joint optimization strategy covering both the keyframe and interframe elements, augmented by physics-informed constraints, drives training. The framework can adopt existing time-dependent NeRF architectures for the dynamic radiance field while using relatively simple neural networks, such as MLPs, for the velocity field. The core innovation lies in the third component: the joint optimization strategy and its loss functions enable precise learning of disentangled velocity fields without additional object-specific information or masks.
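As a rough illustration of how those three components could fit together, here is a minimal PyTorch-style sketch: a small MLP velocity field, a stand-in for any time-dependent radiance field, and a joint loss that adds a simple transport-style physics term to the usual photometric loss. The module names, shapes, and the specific constraint are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Small MLP mapping (x, y, z, t) to a 3D velocity, standing in for the interframe component."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) sample points, t: (N, 1) sample times
        return self.net(torch.cat([xyz, t], dim=-1))

def physics_loss(radiance_field, velocity_field, xyz, t, dt: float = 0.01) -> torch.Tensor:
    """Hypothetical transport constraint: advecting a point by its predicted velocity over a
    small time step should leave density and color approximately unchanged."""
    sigma_now, rgb_now = radiance_field(xyz, t)          # any callable returning (density, color)
    xyz_adv = xyz + velocity_field(xyz, t) * dt
    sigma_adv, rgb_adv = radiance_field(xyz_adv, t + dt)
    return ((sigma_adv - sigma_now) ** 2).mean() + ((rgb_adv - rgb_now) ** 2).mean()

def training_step(radiance_field, velocity_field, render_fn, batch, optimizer, lam: float = 0.1):
    """One joint-optimization step: photometric loss on observed frames plus the physics term."""
    rays, target_rgb, xyz, t = batch                     # sampled rays, ground-truth colors, sample points/times
    pred_rgb = render_fn(radiance_field, rays, t)        # any time-dependent NeRF-style volume renderer
    loss = ((pred_rgb - target_rgb) ** 2).mean()         # reconstruction on observed frames
    loss = loss + lam * physics_loss(radiance_field, velocity_field, xyz, t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The physics term is what couples the velocity field to the radiance field during joint optimization, which is how, per the article, disentangled velocities can be learned without masks or other object-specific supervision.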

Dec 19, 2023

VR market keeps shrinking even as Meta pours billions of dollars a quarter into metaverse

Posted in categories: augmented reality, virtual reality

Despite the company’s commitment to making its founder’s dream come true, the virtual reality market is contracting.

Sales of VR headsets and augmented reality glasses in the U.S. plummeted nearly 40% to $664 million in 2023, as of Nov. 25, according to data shared with CNBC by research firm Circana. That’s a much steeper drop than last year, when sales of AR and VR devices slid 2% to $1.1 billion.

The two-year decline underscores Meta’s continuing challenge in bringing the immersive technology out of a niche gaming corner and into the mainstream. While Zuckerberg said, in announcing Facebook’s pivot to Meta in late 2021, that it would likely take a decade to reach a billion users, he may need to start showing more optimistic data to appease a shareholder base that’s been critical of the company’s hefty and risky investments.

Dec 15, 2023

China’s Air Force simulates warplane maintenance with Microsoft headsets

Posted in categories: augmented reality, holograms, military

A video released by a Chinese state broadcaster shows the Microsoft HoloLens 2 being used to simulate maintenance on a warplane.

The Chinese military is reported to be using the mixed-reality goggles, manufactured by Microsoft, for equipment maintenance, as shown in the broadcast.

Dec 6, 2023

Introducing Ego-Exo4D: A foundational dataset for research on video learning and multimodal perception

Posted in categories: augmented reality, media & arts, robotics/AI, transportation

📸 Watch this video on Facebook: https://www.facebook.com/share/v/NNeZinMSuGPtQDXL/?mibextid=i5uVoL

Working together as a consortium, FAIR or university partners captured these perspectives with the help of more than 800 skilled participants in the United States, Japan, Colombia, Singapore, India, and Canada. In December, the consortium will open source the data (including more than 1,400 hours of video) and annotations for novel benchmark tasks. Additional details about the datasets can be found in our technical paper. Next year, we plan to host a first public benchmark challenge and release baseline models for ego-exo understanding.

Each university partner followed their own formal review processes to establish the standards for collection, management, informed consent, and a license agreement prescribing proper use. Each member also followed the Project Aria Community Research Guidelines. With this release, we aim to provide the tools the broader research community needs to explore ego-exo video, multimodal activity recognition, and beyond.

How Ego-Exo4D works.

Ego-Exo4D focuses on skilled human activities, such as playing sports, music, cooking, dancing, and bike repair. Advances in AI understanding of human skill in video could facilitate many applications. For example, in future augmented reality (AR) systems, a person wearing smart glasses could quickly pick up new skills with a virtual AI coach that guides them through a how-to video; in robot learning, a robot watching people in its environment could acquire new dexterous manipulation skills with less physical experience; in social networks, new communities could form based on how people share their expertise and complementary skills in video.
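To give a concrete sense of what paired "ego-exo" data looks like to a downstream model, the toy loader below pairs one egocentric clip with its surrounding exocentric views and an activity label. The manifest format, field names, and file paths are hypothetical, invented for illustration; they are not Ego-Exo4D's actual release format or API.

```python
import json
from dataclasses import dataclass
from pathlib import Path

@dataclass
class EgoExoSample:
    ego_clip: Path          # the wearer's first-person (egocentric) view
    exo_clips: list[Path]   # stationary third-person (exocentric) views of the same take
    activity: str           # e.g. "cooking", "bike repair"

def load_manifest(manifest_path: str) -> list[EgoExoSample]:
    """Read a hypothetical JSON manifest pairing each ego clip with its exo views."""
    records = json.loads(Path(manifest_path).read_text())
    return [
        EgoExoSample(
            ego_clip=Path(r["ego_clip"]),
            exo_clips=[Path(p) for p in r["exo_clips"]],
            activity=r["activity"],
        )
        for r in records
    ]

# Example manifest entry (hypothetical layout):
# [{"ego_clip": "takes/0001/ego.mp4",
#   "exo_clips": ["takes/0001/exo_0.mp4", "takes/0001/exo_1.mp4"],
#   "activity": "bike repair"}]
```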

Nov 24, 2023

Future Business Tech

Posted in categories: augmented reality, bioengineering, biological, blockchains, genetics, Ray Kurzweil, robotics/AI, singularity, transhumanism

This video explores the future of the world from 2030 to 10,000 A.D. and beyond… Watch this next video about the Technological Singularity: https://youtu.be/yHEnKwSUzAE.
🎁 5 Free ChatGPT Prompts To Become a Superhuman: https://bit.ly/3Oka9FM
🤖 AI for Business Leaders (Udacity Program): https://bit.ly/3Qjxkmu.
☕ My Patreon: https://www.patreon.com/futurebusinesstech.
➡️ Official Discord Server: https://discord.gg/R8cYEWpCzK

0:00 2030
12:40 2050
39:11 2060
49:57 2070
01:04:58 2080
01:16:39 2090
01:28:38 2100
01:49:03 2200
02:05:48 2300
02:20:31 3000
02:28:18 10,000 A.D.
02:35:29 1 Million Years.
02:43:16 1 Billion Years.

Nov 14, 2023

Glasses use sonar, AI to interpret upper body poses in 3D

Posted in categories: augmented reality, health, robotics/AI, virtual reality, wearables

Throughout history, sonar’s distinctive “ping” has been used to map oceans, spot enemy submarines and find sunken ships. Today, a miniature form of that technology, developed by Cornell researchers, is proving to be a game-changer in wearable body sensing.

PoseSonic is the latest sonar-equipped wearable from Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) lab. It consists of off-the-shelf eyeglasses outfitted with micro sonar that can track the wearer’s upper body movements in 3D through a combination of inaudible soundwaves and artificial intelligence (AI).

With further development, PoseSonic could enhance augmented reality and virtual reality, and track detailed physical and behavioral data for personal health, the researchers said.
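As a minimal sketch of the "inaudible soundwaves plus AI" pipeline described above, the snippet below correlates a microphone frame against the emitted chirp to form an echo profile, then regresses 3D joint positions from that profile with a small neural network. The signal lengths, network architecture, and nine-joint output are illustrative assumptions, not PoseSonic's actual design.

```python
import numpy as np
import torch
import torch.nn as nn

def echo_profile(received: np.ndarray, chirp: np.ndarray) -> np.ndarray:
    """Cross-correlate the microphone signal with the emitted inaudible chirp, giving echo
    energy as a function of round-trip delay (and hence distance to reflecting body parts)."""
    return np.abs(np.correlate(received, chirp, mode="valid"))

class EchoToPose(nn.Module):
    """Tiny 1D-CNN regressor from an echo profile to 3D upper-body joint positions."""
    def __init__(self, num_joints: int = 9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # works for any profile length
        )
        self.head = nn.Linear(32, num_joints * 3)
        self.num_joints = num_joints

    def forward(self, profiles: torch.Tensor) -> torch.Tensor:
        # profiles: (batch, 1, profile_length) -> (batch, num_joints, 3)
        feats = self.backbone(profiles).squeeze(-1)
        return self.head(feats).view(-1, self.num_joints, 3)

# Toy usage with random stand-in signals:
chirp = np.random.randn(128).astype(np.float32)       # stand-in for the emitted chirp
received = np.random.randn(4096).astype(np.float32)   # stand-in for one microphone frame
profile = torch.from_numpy(echo_profile(received, chirp)).float().view(1, 1, -1)
pose = EchoToPose(num_joints=9)(profile)               # (1, 9, 3) predicted joint coordinates
```

A real system would presumably use multiple microphones and train such a regressor against camera- or motion-capture-derived ground-truth poses; the point here is only the overall echo-profile-to-pose structure.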

Nov 11, 2023

What If We Became A Type I Civilization? 15 Predictions

Posted in categories: augmented reality, bioengineering, biological, genetics, Ray Kurzweil, robotics/AI, singularity, transhumanism

This video explores what life would be like if we became a Type I Civilization. Watch this next video about the Technological Singularity: https://youtu.be/yHEnKwSUzAE.
🎁 5 Free ChatGPT Prompts To Become a Superhuman: https://bit.ly/3Oka9FM
🤖 AI for Business Leaders (Udacity Program): https://bit.ly/3Qjxkmu.
☕ My Patreon: https://www.patreon.com/futurebusinesstech.
➡️ Official Discord Server: https://discord.gg/R8cYEWpCzK

SOURCES:
• https://www.futuretimeline.net.
• The Singularity Is Near: When Humans Transcend Biology (Ray Kurzweil): https://amzn.to/3ftOhXI
• The Future of Humanity (Michio Kaku): https://amzn.to/3Gz8ffA
