Helping AI agents search to get the best results out of large language models

Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence (AI) tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.

AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.

But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes.
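Such a backtracking loop can be sketched in a few lines. The code below is an illustrative sketch only, not the system described in the article: `call_llm` and `run_tests` are hypothetical stand-ins for a real model API and a real test harness, and the retry logic simply appends each failure message to the prompt so the next attempt can learn from it.

```python
# Hypothetical stubs: a real agent would call an LLM API and run a
# genuine test suite here.
def call_llm(prompt):
    # Placeholder model call: echoes the last line of the prompt.
    return "translated code for: " + prompt.splitlines()[-1]

def run_tests(code):
    # Placeholder harness: returns (passed, error_message).
    passed = "translated" in code
    return passed, "" if passed else "syntax error"

def translate_with_backtracking(source, max_attempts=3):
    """Ask the LLM to translate a file; on test failure, retry with
    the error message folded back into the prompt."""
    prompt = "Translate this file to a modern language:\n" + source
    for attempt in range(1, max_attempts + 1):
        candidate = call_llm(prompt)
        passed, error = run_tests(candidate)
        if passed:
            return candidate, attempt
        # Backtrack: record the lesson from this failed attempt.
        prompt += f"\nPrevious attempt failed with: {error}\nTry again."
    raise RuntimeError("all attempts failed")

code, attempts = translate_with_backtracking("int main() { return 0; }")
```

With these stubs the first attempt passes; in a real system the loop's value comes from the failure messages accumulating in the prompt across retries.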

New computer vision method links photos to floor plans with pixel-level accuracy

For people, matching what they see on the ground to a map is second nature. For computers, it has been a major challenge. A Cornell research team has introduced a new method that helps machines make these connections—an advance that could improve robotics, navigation systems, and 3D modeling.

The work, presented at the 2025 Conference on Neural Information Processing Systems and published on the arXiv preprint server, tackles a major weakness in today’s computer vision tools. Current systems perform well when comparing similar images, but they falter when the views differ dramatically, such as linking a street-level photo to a simple map or architectural drawing.

The new approach teaches machines to find pixel-level matches between a photo and a floor plan, even when the two look completely different. Kuan Wei Huang, a doctoral student in computer science, is the first author; the co-authors are Noah Snavely, a professor at Cornell Tech; Bharath Hariharan, an associate professor at the Cornell Ann S. Bowers College of Computing and Information Science; and undergraduate Brandon Li, a computer science student.

AI uncovers double-strangeness: A new double-Lambda hypernucleus

Researchers from the High Energy Nuclear Physics Laboratory at the RIKEN Pioneering Research Institute (PRI) in Japan and their international collaborators have made a discovery that bridges artificial intelligence and nuclear physics. By applying deep learning techniques to a vast amount of unexamined nuclear emulsion data from the J-PARC E07 experiment, the team identified, for the first time in 25 years, a new double-Lambda hypernucleus.

This marks the world’s first AI-assisted observation of such an exotic nucleus—an atomic nucleus containing two strange quarks. The finding, published in Nature Communications, represents a major advance in experimental nuclear physics and provides new insight into the composition of neutron star cores, one of the most extreme environments in the universe.

A new tool is revealing the invisible networks inside cancer

Spanish researchers have created a powerful new open-source tool that helps uncover the hidden genetic networks driving cancer. Called RNACOREX, the software can analyze thousands of molecular interactions at once, revealing how genes communicate inside tumors and how those signals relate to patient survival. Tested across 13 different cancer types using international data, the tool matches the predictive power of advanced AI systems—while offering something rare in modern analytics: clear, interpretable explanations that help scientists understand why tumors behave the way they do.

The World’s Strangest Computer Is Alive and It Blurs the Line Between Brains and Machines

At first glance, the idea sounds implausible: a computer made not of silicon, but of living brain cells. It’s the kind of concept that seems better suited to science fiction than to a laboratory bench. And yet, in a few research labs around the world, scientists are already experimenting with computers that incorporate living human neurons. Such computers are now being trained to perform complex tasks such as playing games and even driving robots.

These systems are built from brain organoids: tiny, lab-grown clusters of human neurons derived from stem cells. Though often nicknamed “mini-brains,” they are not thinking minds or conscious entities. Instead, they are simplified neural networks that can be interfaced with electronics, allowing researchers to study how living neurons process information when placed in a computational loop.

In fact, some researchers even claim that these tools are pushing the frontiers of medicine, along with those of computing. Dr. Ramon Velaquez, a neuroscientist from Arizona State University, is one such researcher.

New microchips mimic human nerves to boost speed and cut power waste

At the same time, estimates from the US indicate that power consumption from IT applications has doubled over the past eight years alongside the rise of AI. Researchers from California’s Lawrence Berkeley National Laboratory suggest that more than half of the electricity used by data centers will be used solely for AI by 2028.

This puts the rapid advance of the digital revolution at risk as energy demand can no longer be met. Traditional silicon chips, which draw power even when idle, are becoming a critical limitation. As a result, researchers worldwide are exploring alternative microelectronic technologies that are far more energy-efficient.

To address the challenge, the team will begin developing superconducting circuits on January 1. These circuits, which were first envisioned by Hungarian-American mathematician and physicist John von Neumann in the 1950s, exploit quantum effects to transmit data using extremely short voltage pulses.

Artificial Intelligence for Organelle Segmentation in Live-Cell Imaging

The article is published as a free, open-access paper in Research, a Science Partner Journal.


Investigations into organelles illuminate the intricate interplay of cellular systems, uncovering how specialized structures orchestrate homeostasis, regulate metabolic pathways, and modulate signal transduction. The structural and functional integrity of organelles, including mitochondria, the endoplasmic reticulum (ER), the Golgi apparatus (GA), and lysosomes, is critical for cellular health. Deviations in organelle shape and behavior are frequently associated with disease development [51]. Consequently, precise characterization of organelles is crucial for advancing our understanding of cell biology and its underlying mechanisms.

Organelle image segmentation is important for extracting precise spatial and structural information, forming the foundation for subsequent quantitative analyses. Unlike whole-cell or nuclear segmentation, organelle segmentation is inherently more challenging due to the smaller size, irregular shapes, and intricate distributions of these structures. Additionally, many organelles exhibit dynamic behaviors such as fusion, fission, and trafficking, requiring accurate segmentation across both temporal and spatial dimensions. Advances in segmentation technologies have notably improved the ability to identify and characterize organelles with high precision, opening new avenues for understanding cellular functions in health and disease.
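To make the segmentation-then-quantification pipeline concrete, here is a minimal sketch of the classical baseline that learned models improve on: threshold a fluorescence frame, label connected components, and measure each object's area. The synthetic frame and the `segment_blobs` helper are illustrative assumptions, not part of the published method; real organelle segmentation would replace the global threshold with a trained network.

```python
import numpy as np
from scipy import ndimage

def segment_blobs(frame, threshold):
    """Threshold a 2-D intensity frame, label connected components,
    and return the label image plus per-object pixel areas."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)  # connected-component labeling
    areas = ndimage.sum_labels(mask, labels, index=range(1, n + 1))
    return labels, areas

# Synthetic frame: two bright rectangular "organelles" on a dark background.
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0   # 4x4 blob, area 16
frame[40:45, 30:36] = 1.0   # 5x6 blob, area 30

labels, areas = segment_blobs(frame, threshold=0.5)
print(labels.max(), sorted(areas))  # 2 objects, areas [16.0, 30.0]
```

Per-object areas like these feed directly into the quantitative analyses the paragraph describes; tracking the same labels across frames is what makes the temporal dimension (fusion, fission, trafficking) hard.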
