
AI tool can detect missed Alzheimer’s diagnoses while reducing disparities

Researchers at UCLA have developed an artificial intelligence tool that can use electronic health records to identify patients with undiagnosed Alzheimer’s disease, addressing a critical gap in Alzheimer’s care: significant underdiagnosis, particularly among underrepresented communities.

The study appears in the journal npj Digital Medicine.

Squashing ‘fantastic bugs’ hidden in AI benchmarks

After reviewing thousands of benchmarks used in AI development, a Stanford team found that 5% could have serious flaws with far-reaching ramifications.

Each time an AI researcher trains a new model to understand language, recognize images, or solve a medical riddle, one big question remains: Is this model better than what went before? To answer that question, AI researchers rely on batteries of benchmarks, or tests to measure and assess a new model’s capabilities. Benchmark scores can make or break a model.

But there are tens of thousands of benchmarks spread across several datasets. Which ones should developers use, and are they all of equal worth?

Destructured Drug Discovery: How Sequence-Based AI Speeds and Expands the Search for New Therapeutics

Predictive computational methods for drug discovery have typically relied on models that incorporate three-dimensional information about protein structure. But these modeling methods face limitations due to high computational costs, expensive training data, and an inability to fully capture protein dynamics.

Ainnocence develops predictive AI models based on target protein sequences. By bypassing 3D structural information entirely, sequence-based AI models can screen billions of drug candidates in hours or days. Ainnocence uses amino acid sequence data from target proteins together with wet lab data to predict drug binding and other biological effects. The company has demonstrated success in discovering COVID-19 antibodies, and its platform can also be used to discover other biomolecules, small molecules, cell therapies, and mRNA vaccines.
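Ainnocence's models are proprietary and the article gives no implementation details, but the general shape of a sequence-only predictor can be sketched: featurize the target's amino acid sequence and a candidate molecule without any 3D structure, then fit a regressor to wet-lab affinity labels. Everything below (the featurizations, the toy training pairs, the affinity values) is hypothetical and only illustrates why inference with such models scales to huge candidate libraries.

```python
# Illustrative sketch only: Ainnocence's actual models are proprietary and not
# described in the article. This shows the general shape of a sequence-only
# binding-affinity predictor: featurize the protein by amino-acid composition,
# featurize the ligand with a toy fingerprint, and fit a regressor on
# (hypothetical) wet-lab affinity labels. No 3D structure is used anywhere.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def protein_features(seq: str) -> np.ndarray:
    """Amino-acid composition: fraction of each residue type (20-dim)."""
    seq = seq.upper()
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / max(len(seq), 1)

def ligand_features(smiles: str, n_bits: int = 64) -> np.ndarray:
    """Toy hashed 'fingerprint' from character bigrams (stand-in for e.g. ECFP)."""
    bits = np.zeros(n_bits)
    for i in range(len(smiles) - 1):
        bits[hash(smiles[i:i + 2]) % n_bits] = 1.0
    return bits

def pair_features(seq: str, smiles: str) -> np.ndarray:
    return np.concatenate([protein_features(seq), ligand_features(smiles)])

# Hypothetical training data: (target sequence, candidate SMILES, measured affinity).
train = [
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "CC(=O)OC1=CC=CC=C1C(=O)O", 6.2),
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "CCN(CC)CCNC(=O)C1=CC=CC=C1", 4.1),
    ("GSHMTEYKLVVVGAGGVGKSALTIQLIQNHFVDE", "CC(=O)OC1=CC=CC=C1C(=O)O", 5.0),
]
X = np.array([pair_features(s, m) for s, m, _ in train])
y = np.array([a for _, _, a in train])

model = GradientBoostingRegressor().fit(X, y)

# Screening is then just batched inference over candidate sequences and ligands,
# which is why sequence-only models can rank enormous candidate sets quickly.
print(model.predict([pair_features(train[0][0], "CC(C)CC1=CC=C(C=C1)C(C)C(=O)O")]))
```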

Medra Raises $52 Million to Speed Drug Discovery With AI Robots

Medra, which programs robots with artificial intelligence to conduct and improve biological experiments, has raised $52 million to build what it says will be one of the largest autonomous labs in the United States.

The deal brings Medra’s total funding to $63 million, including pre-seed and seed financing. Existing investor Human Capital led the new round, which came together just weeks after the company started talking publicly about its work in September, Chief Executive Officer Michelle Lee said in an interview at the company’s San Francisco lab. The company recently signed an agreement to work on early drug discovery with Genentech, a subsidiary of pharmaceutical giant Roche Holding AG.

Scale of living things

Neal Agarwal has published another gift to the internet with Size of Life. It shows the scale of living things, starting with DNA, moving up to hemoglobin, and continuing upward from there.

The scientific illustrations are hand-drawn (without AI) by Julius Csotonyi. Sound & FX by Aleix Ramon and cello music by Iratxe Ibaibarriaga calm the mind and encourage a slow observation of things, but also grow in complexity and weight with the scale. It kind of feels like a meditation exercise.

See also: shrinking to an atom, the speed of light, and of course the classic Powers of Ten.

Scientists just uncovered a major limitation in how AI models understand truth and belief

A new study has found that artificial intelligence systems struggle to distinguish between objective facts and subjective beliefs. This limitation poses risks as these technologies enter high-stakes fields like medicine and law.

The rhythm of swarms: Tunable particles synchronize movement like living organisms

A collaboration between the University of Konstanz and Forschungszentrum Jülich has achieved the first fully tunable experimental realization of a long-predicted “swarmalator” system. The study, published in Nature Communications, shows how tiny, self-propelled particles can simultaneously coordinate their motion and synchronize their internal rhythms—a behavior reminiscent of flashing fireflies, Japanese tree frogs, or schooling fish.

The results underline how collective dynamics can arise from simple interactions, without overarching leadership or control. Possible applications include autonomous robotic swarms.

Swarmalators—short for swarming oscillators—are systems in which each individual not only moves but also oscillates, with motion and rhythm influencing one another.
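The experiment in Konstanz and Jülich has its own particle-level details, but the theoretical swarmalator model it realizes (O'Keeffe, Hong & Strogatz, 2017) is simple enough to sketch: agents attract each other more strongly when their phases agree, and their phases couple more strongly when they are close. Below is a minimal Euler-stepped simulation of that canonical model, with parameter values chosen only for illustration.

```python
# A minimal numerical sketch of the canonical "swarmalator" model
# (O'Keeffe, Hong & Strogatz, 2017), not the specific experimental system in the
# Konstanz/Jülich study: N agents with positions x_i and internal phases theta_i,
# where spatial attraction depends on phase similarity and phase coupling weakens
# with distance. Integrated with simple Euler steps.
import numpy as np

rng = np.random.default_rng(0)
N, J, K, dt, steps = 100, 1.0, 0.5, 0.05, 500

x = rng.uniform(-1, 1, size=(N, 2))      # positions
theta = rng.uniform(0, 2 * np.pi, N)     # internal phases
omega = np.zeros(N)                      # identical natural frequencies

for _ in range(steps):
    dx = x[None, :, :] - x[:, None, :]              # displacements x_j - x_i
    dist = np.linalg.norm(dx, axis=-1) + np.eye(N)  # avoid divide-by-zero on diagonal
    dtheta_ij = theta[None, :] - theta[:, None]

    # Spatial dynamics: phase-dependent attraction plus short-range repulsion.
    attract = (1 + J * np.cos(dtheta_ij))[:, :, None] * dx / dist[:, :, None]
    repel = dx / (dist ** 2)[:, :, None]
    np.fill_diagonal(attract[:, :, 0], 0); np.fill_diagonal(attract[:, :, 1], 0)
    np.fill_diagonal(repel[:, :, 0], 0);   np.fill_diagonal(repel[:, :, 1], 0)
    x += dt * (attract - repel).mean(axis=1)

    # Phase dynamics: Kuramoto-like coupling weighted by inverse distance.
    coupling = np.sin(dtheta_ij) / dist
    np.fill_diagonal(coupling, 0)
    theta += dt * (omega + (K / N) * coupling.sum(axis=1))

# Order parameter |<exp(i*theta)>| near 1 indicates synchronized internal rhythms.
print("phase sync:", abs(np.exp(1j * theta).mean()))
```

Varying the couplings J and K moves the system between the static, phase-wave, and synchronized states described in the swarmalator literature, which is the kind of tunability the experiment reproduces physically.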

Quantum machine learning nears practicality as partial error correction reduces hardware demands

Imagine a future where quantum computers supercharge machine learning—training models in seconds, extracting insights from massive datasets and powering next-gen AI. That future might be closer than you think, thanks to a breakthrough from researchers at Australia’s national research agency, CSIRO, and The University of Melbourne.

Until now, one big roadblock stood in the way: errors. Quantum processors are noisy, and quantum machine learning (QML) models need deep circuits with hundreds of gates. Even tiny errors pile up fast, wrecking accuracy. The usual fix—quantum error correction—may work, but it’s expensive. We’re talking millions of qubits just to run one model. That’s way beyond today’s hardware.

So, what’s the game-changer? The team discovered that you don’t need to correct everything.
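The paper's actual scheme isn't spelled out in this excerpt, so the following is only a back-of-envelope sketch of the error-accumulation argument: if each gate fails independently with probability p, a deep circuit's success probability decays roughly as (1 - p) raised to the gate count, and protecting even a fraction of the gates slows that decay considerably. The gate counts and error rates below are hypothetical.

```python
# Back-of-envelope sketch of the error-accumulation argument only; the actual
# CSIRO / University of Melbourne scheme is not described in this excerpt.
# Assume each gate fails independently with probability p, so a circuit with n
# gates succeeds with probability roughly (1 - p)**n, and that error correction
# suppresses a protected gate's error rate to p_corrected.
def circuit_success(n_gates, p, p_corrected=None, protected_fraction=0.0):
    """Rough success probability when a fraction of gates is error-corrected."""
    n_protected = int(n_gates * protected_fraction)
    n_raw = n_gates - n_protected
    p_c = p_corrected if p_corrected is not None else p
    return (1 - p) ** n_raw * (1 - p_c) ** n_protected

n, p, p_c = 500, 1e-3, 1e-6   # hypothetical deep QML circuit and error rates

print("no correction:   %.3f" % circuit_success(n, p))
print("half corrected:  %.3f" % circuit_success(n, p, p_c, 0.5))
print("fully corrected: %.3f" % circuit_success(n, p, p_c, 1.0))
# Correcting only part of the circuit recovers a sizable share of the lost
# fidelity at a fraction of the qubit overhead of full error correction.
```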

Breakthrough uses artificial intelligence to identify different brain cells in action

A decades-old challenge in neuroscience has been solved by harnessing artificial intelligence (AI) to identify the electrical signatures of different types of brain cells for the first time, as part of a study in mice led by researchers from UCL.

AI Guides Robot on the ISS for the First Time

Dr. Somrita Banerjee: “This is the first time AI has been used to help control a robot on the ISS. It shows that robots can move faster and more efficiently without sacrificing safety, which is essential for future missions where humans won’t always be able to guide them.”


How can an AI robot help improve human space exploration? That is the question a recent study presented at the 2025 International Conference on Space Robotics set out to address, with a team of researchers investigating new ways to make AI robots in space more autonomous. The work could help scientists improve human-robot collaboration, specifically as humanity begins settling on the Moon and eventually Mars.

For the study, the researchers examined how a technique called machine learning-based warm starts could be used to improve robot autonomy. To accomplish this, they tested their algorithm on NASA's Astrobee free-flying robot aboard the International Space Station (ISS), where it maneuvered in microgravity. The goal was to determine whether Astrobee could navigate the ISS without human intervention, relying only on its algorithm to plan safe paths through the station. In the end, the researchers found that Astrobee successfully navigated the station's tight interior with little need for human intervention.
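The Astrobee planner and the learned model behind it aren't detailed here, but the idea of a machine learning-based warm start is generic: instead of starting a trajectory optimizer from a blank guess, seed it with an initial trajectory predicted from the problem, so the solver typically converges in fewer iterations. In the sketch below the "learned" predictor is a hand-coded stand-in, and the cost function, obstacle, and waypoint count are all made up for illustration.

```python
# Generic illustration of a machine-learning-based warm start; the actual
# Astrobee planner and its learned model are not described in this excerpt.
# Idea: a trajectory optimizer tends to converge faster when its initial guess
# comes from a learned predictor instead of a trivial cold start.
import numpy as np
from scipy.optimize import minimize

T = 20                                        # free trajectory waypoints
start, goal = np.zeros(2), np.array([4.0, 3.0])
obstacle, radius = np.array([2.0, 1.5]), 0.8  # hypothetical keep-out zone

def cost(flat):
    """Path smoothness plus a soft penalty for entering the obstacle."""
    pts = np.vstack([start, flat.reshape(-1, 2), goal])
    smooth = np.sum(np.diff(pts, axis=0) ** 2)
    dists = np.linalg.norm(pts - obstacle, axis=1)
    penalty = np.sum(np.maximum(0.0, radius - dists) ** 2)
    return smooth + 50.0 * penalty

def learned_warm_start():
    """Stand-in for a neural network that maps (start, goal, obstacles) to an
    initial trajectory; here just a hand-coded detour around the obstacle."""
    s = np.linspace(0, 1, T + 2)[1:-1, None]
    line = start + s * (goal - start)
    detour = np.array([0.0, 1.0]) * np.sin(np.pi * s)  # push path off the obstacle
    return (line + detour).ravel()

cold = minimize(cost, np.zeros(2 * T), method="L-BFGS-B")
warm = minimize(cost, learned_warm_start(), method="L-BFGS-B")
print("cold start: cost %.3f, %d function evals" % (cold.fun, cold.nfev))
print("warm start: cost %.3f, %d function evals" % (warm.fun, warm.nfev))
```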
