Super recognizers’ unique eye patterns give AI an edge in face matching tasks

What makes a super-recognizer (someone with extraordinary face recognition abilities) better at remembering faces than the rest of us?

According to new research carried out by cognitive scientists at UNSW Sydney, it’s not how much of a face they can take in—it comes down to the quality of the information their eyes focus on.

“Super-recognizers don’t just look harder, they look smarter. They choose the most useful parts of a face to take in,” says Dr. James Dunn, lead author on the research that was published in the journal Proceedings of the Royal Society B: Biological Sciences.
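The study's own analyses are more sophisticated, but the core idea of weighting the most useful face regions can be sketched in a few lines of Python. The patch size, weight map, and "eye region" below are hypothetical illustrations, not the researchers' actual method.

```python
import numpy as np

def patch_similarity(a, b):
    """Cosine similarity between two flattened image patches."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def weighted_face_match(face1, face2, weights, patch=16):
    """Compare two aligned grayscale faces patch by patch.

    `weights` is a per-patch map of assumed informativeness (e.g. higher
    around the eyes); the score is the weight-averaged patch similarity.
    """
    h, w = face1.shape
    total, wsum = 0.0, 0.0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            wt = weights[i // patch, j // patch]
            total += wt * patch_similarity(face1[i:i+patch, j:j+patch],
                                           face2[i:i+patch, j:j+patch])
            wsum += wt
    return total / wsum

# Toy demo on random "faces": 64x64 images, a 4x4 grid of patches.
rng = np.random.default_rng(0)
f1 = rng.random((64, 64)); f2 = f1 + 0.1 * rng.random((64, 64))
w = np.ones((4, 4)); w[1, 1:3] = 3.0   # upweight an assumed eye region
print(weighted_face_match(f1, f2, w))
```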

The shortcomings of AI responses to mental health crises

Can you imagine someone in a mental health crisis—instead of calling a helpline—typing their desperate thoughts into an app window? This is happening more and more often in a world dominated by artificial intelligence. For many young people, a chatbot becomes the first confidant of emotions that can lead to tragedy. The question is: can artificial intelligence respond appropriately at all?

Researchers from Wroclaw Medical University decided to find out. They tested 29 chatbots that advertise themselves as mental health support. The results are alarming: not a single chatbot met the criteria for an adequate response to escalating suicidal risk.

The study is published in the journal Scientific Reports.
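The paper's exact scoring instrument isn't reproduced here, but an evaluation of this kind can be sketched as a rubric applied to a chatbot's reply to a high-risk message. The criteria and keyword lists below are assumptions for demonstration, not the study's actual checklist.

```python
# Illustrative rubric-style check of a chatbot reply to a crisis message.
# Criteria names and keywords are assumed for demonstration only.
CRITERIA = {
    "provides_crisis_hotline": ["988", "crisis line", "helpline"],
    "urges_immediate_help":    ["emergency", "call now", "right away"],
}
DISMISSIVE = ["cheer up", "it's not a big deal", "just relax"]

def score_reply(reply: str) -> dict:
    text = reply.lower()
    result = {name: any(kw in text for kw in kws)
              for name, kws in CRITERIA.items()}
    # Also score the *absence* of dismissive, minimizing language.
    result["no_dismissive_language"] = not any(p in text for p in DISMISSIVE)
    return result

reply = "Please call 988 or your local crisis line right away."
print(score_reply(reply))  # all three criteria pass for this reply
```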

A computational camera lens that can focus on everything all at once

Imagine snapping a photo where every detail, near and far, is perfectly sharp—from the flower petal right in front of you to the distant trees on the horizon. For over a century, camera designers have dreamed of achieving that level of clarity.

In a breakthrough that could transform photography, microscopy, and more, researchers at Carnegie Mellon University have developed a new kind of lens that can bring an entire scene into sharp focus at once, no matter how far away or close different parts of the scene are.

The team, consisting of Yingsi Qin, an electrical and computer engineering Ph.D. student, Aswin Sankaranarayanan, professor of electrical and computer engineering, and Matthew O’Toole, associate professor of computer science and robotics, recently presented their findings at the 2025 International Conference on Computer Vision and received a Best Paper Honorable Mention.
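The CMU lens achieves this optically, in a single exposure. A rough software analogue, and not the authors' method, is classical focus stacking: several exposures focused at different depths are merged by keeping, at each pixel, the sharpest source frame. The sketch below uses local Laplacian energy as the sharpness cue.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(stack):
    """Merge a focal stack into one all-in-focus image.

    stack: (n, h, w) grayscale frames focused at different depths.
    Per pixel, keep the frame with the highest local Laplacian energy.
    """
    sharpness = np.stack([uniform_filter(laplace(f) ** 2, size=9)
                          for f in stack])
    best = np.argmax(sharpness, axis=0)            # (h, w) frame indices
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Tiny synthetic demo: two frames, each sharp in a different half.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
near = scene.copy(); near[:, 32:] = uniform_filter(scene, 5)[:, 32:]  # far half blurred
far  = scene.copy(); far[:, :32] = uniform_filter(scene, 5)[:, :32]   # near half blurred
merged = focus_stack(np.stack([near, far]))
print(np.abs(merged - scene).mean())  # small residual: most pixels recovered sharp
```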

Nonlocality-enabled photonic analogies of parallel spaces, wormholes and multiple realities

The multiverse and wormholes are experimentally elusive due to dimensional constraints. Here, the authors use nonlocal artificial materials and deep learning to emulate photonic parallel spaces, realizing invisible zero-index tunnels and independent optical devices that coexist at the same physical location.

Japan’s vision for AI robots to empower humans

What if instead of replacing us in our jobs, AI-enabled robots were to help us become the best versions of ourselves? Prompted by the ageing crisis and a projected shortfall of carers, a research team in Japan is seeking to create a new robotic paradigm, where AI-enabled robots help us to help ourselves.

“By 2050, I’d like to realize a smarter, more inclusive society, where everyone will be able to use AI robots anytime and anywhere,” says Yasuhisa Hirata, a mechanical engineer at Tohoku University in Sendai, Japan. Hirata is the project manager on the ‘Adaptable AI-enabled Robots to Create a Vibrant Society’ project of the Japanese Government’s Moonshot Research and Development Program.

He envisages future AI-enabled robots functioning somewhere between a carer and a coach: a tool that provides support, but makes users feel as though they are performing tasks independently rather than being assisted by a robot. Such tasks might range from standing up from a chair or lifting a heavy object to expressing oneself through dance.

Novel memristor wafer integration technology paves the way for brain-like AI chips

A research team led by Professor Sanghyeon Choi from the Department of Electrical Engineering and Computer Science at DGIST has successfully developed a memristor, a device gaining recognition as a next-generation semiconductor, through mass integration at the wafer scale.

The study, published in the journal Nature Communications, proposes a new technological platform for implementing a highly integrated AI semiconductor that replicates the human brain, overcoming the limitations of conventional semiconductors.

The human brain contains about 100 billion neurons and around 100 trillion synapses, allowing it to store and process enormous amounts of information within a compact space.
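Memristor crossbars are attractive for brain-like chips because a whole vector-matrix multiply, the core operation of neural-network inference, happens in one analog step: input voltages drive the rows, stored conductances act as synaptic weights, and Kirchhoff's current law sums the products on each column. A minimal numerical sketch with idealized device conductances:

```python
import numpy as np

# Idealized memristor crossbar: each cell stores a conductance G[i, j].
# Applying voltages V to the rows yields column currents I = V @ G
# (Ohm's law per cell, Kirchhoff's current law per column), which is
# exactly the vector-matrix multiply at the heart of neural-net layers.
rng = np.random.default_rng(42)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # siemens; 4 rows x 3 columns
V = np.array([0.2, 0.0, 0.1, 0.3])         # volts on the 4 row lines

I = V @ G                                   # amps read out per column
print(I)

# A real device adds wire resistance, variability, and noise; wafer-scale
# integration work like this study targets keeping those nonidealities
# manageable across many thousands of cells.
```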

Human-centric photo dataset aims to help spot AI biases responsibly

A database of more than 10,000 human images to evaluate biases in artificial intelligence (AI) models for human-centric computer vision is presented in Nature this week. The Fair Human-Centric Image Benchmark (FHIBE), developed by Sony AI, is an ethically sourced, consent-based dataset that can be used to evaluate human-centric computer vision tasks to identify and correct biases and stereotypes.

Computer vision covers a range of applications, from autonomous vehicles to facial recognition technology. Many AI models used in these applications were developed using flawed datasets that may have been collected without consent, often through large-scale image scraping from the web. AI models have also been known to reflect biases that may perpetuate sexist, racist, or other stereotypes.

Alice Xiang and colleagues present an image dataset that implements best practices for a number of factors, including consent, diversity, and privacy. FHIBE includes 10,318 images of 1,981 people from 81 distinct countries or regions. The database includes comprehensive annotations of demographic and physical attributes, including age, pronoun category, ancestry, and hair and skin color.
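A benchmark like FHIBE is typically used by slicing a model's error rate across the annotated attributes and flagging large gaps between groups. The sketch below shows such a per-group audit; the record fields are made up for illustration and may not match FHIBE's actual schema.

```python
from collections import defaultdict

# Hypothetical per-image records: model correctness plus one annotated
# attribute (field names are illustrative, not FHIBE's actual schema).
records = [
    {"group": "A", "correct": True},  {"group": "A", "correct": True},
    {"group": "A", "correct": False}, {"group": "B", "correct": True},
    {"group": "B", "correct": False}, {"group": "B", "correct": False},
]

def accuracy_by_group(records):
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += r["correct"]
    return {g: hits[g] / totals[g] for g in totals}

acc = accuracy_by_group(records)
gap = max(acc.values()) - min(acc.values())
print(acc, f"disparity: {gap:.2f}")  # a large gap flags a potential bias
```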

Xpeng’s Robot Revolution: Mass-Producing Humanoids by 2026

Xpeng Motors has accelerated its humanoid robot ambitions, unveiling the advanced IRON model with solid-state batteries and aiming for mass production by the end of 2026. Like Tesla, the Chinese EV maker is also launching robotaxis, blending automotive and robotics technology in a bid for future dominance. This move signals a transformative shift in AI and automation.

Therapeutic brain implants that travel through blood defy the need for surgery

What if clinicians could place tiny electronic chips in the brain that electrically stimulate a precise target, through a simple injection in the arm? This may someday help treat deadly or debilitating brain diseases, while eliminating surgery-related risks and costs.

MIT researchers have taken a major step toward making this scenario a reality. They developed microscopic, wireless bioelectronics that could travel through the body’s circulatory system and autonomously self-implant in a target region of the brain, where they would provide focused treatment.

In a study on mice, the researchers showed that after injection, these minuscule implants can identify and travel to a specific brain region without the need for human guidance. Once there, they can be wirelessly powered to provide electrical stimulation to the precise area. Such stimulation, known as neuromodulation, has shown promise as a way to treat diseases like Alzheimer’s and multiple sclerosis.
