
Bye bye, AR headsets?

Mojo Vision, a California-based company that wants to make augmented reality (AR)-capable smart contact lenses, has already conducted the first human trial of its technology. Last week, the company’s CEO Drew Perkins became the first person to use the contact lenses and shared his experience in a blog post.


Mojo Vision’s device design includes many firsts and now the prototype is good enough to be trialed. Is the future already here?

What else can deepfakes do?

We’ve seen examples of deepfakes used almost to change the course of history, as when Zelensky footage emerged back in March telling the Ukrainian army to lay down its arms amid the Russian invasion. Fortunately, it was sloppy, and the army didn’t buy it. Now, considering what happens when a post-COVID world that birthed many remote job opportunities for digital nomads merges with AI, the FBI Internet Crime Complaint Center (IC3) has issued a warning.


The Federal Bureau of Investigation (FBI) has warned that some people are using deepfakes to apply for remote tech jobs.

SpaceX’s Starlink provides the fastest satellite internet in the world.

Starlink has been equally praised in recent months for helping civilians in Ukraine and criticized for making astronomical observations harder, to the point that some argue it could endanger humanity.

There’s no denying the experience it provides is impressive, with one user recently telling IE it allowed him to live an enviable off-grid lifestyle with 300 watts of solar energy.

What happens when machines begin to question their origins?

In this short film created with generative art, we explore how artificial intelligence sees the universe, its creators, and its potential futures. I believe the emergence of artistic A.I. has touched off a new era for art that could be as profound as the first cave paintings, 50,000 years ago. If these artistic capabilities are possible after only a few decades of A.I. research, what will the next 50,000 years hold? What will we become?

Crafted by Melodysheep in collaboration with artificial intelligence.

Supported by the good people at Protocol Labs.

Summary: Researchers identified a novel brain network, including the fronto-parietal networks and the fusiform gyrus, that helps with the encoding of visual mental imagery.

Source: Paris Brain Institute.

Every day, we call upon a unique capacity of our brain, visual mental imagery, which allows us to visualize images, objects or people ‘in our heads’. Based on the recent case of a patient with a specific brain lesion, Paolo Bartolomeo’s group (Inserm) in the PICNIC Lab at the Paris Brain Institute has identified a region that may be key in mental visualization.

Mobile robots are now being introduced into a wide variety of real-world settings, including public spaces, home environments, health care facilities and offices. Many of these robots are specifically designed to interact and collaborate with humans, helping them to complete hands-on physical tasks.

To improve the performance of robots on interactive and manual tasks, roboticists will need to ensure that they can effectively sense stimuli in their environment. In recent years, many engineers and materials scientists have thus been trying to develop systems that can artificially replicate biological sensory processes.

Researchers at Scuola Superiore Sant’Anna, Ca’ Foscari University of Venice, Sapienza University of Rome and other institutes in Italy have recently developed an artificial skin that could be used to improve the tactile capabilities of both existing and newly developed robots. Their approach, introduced in a paper published in Nature Machine Intelligence, replicates the function of a class of mechanoreceptor cells located in the human superficial dermis, known as Ruffini receptors.