
2025 Nobel Prize in Physics Peer Review

Introduction.

Grounded in the scientific method, this peer review critically examines the work’s methodology, empirical validity, broader implications, and opportunities for advancement, aiming to foster deeper understanding and iterative progress in quantum technologies.

Executive Summary.

This work, based on experiments conducted in 1984–1985, addresses a fundamental question in quantum physics: the scale at which quantum effects persist in macroscopic systems.

By engineering a Josephson junction-based circuit where billions of Cooper pairs behave collectively as a single quantum entity, the laureates provided empirical evidence that quantum phenomena like tunneling through energy barriers and discrete energy levels can manifest in human-scale devices.
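
For readers who want the underlying picture, the standard textbook model (not spelled out in this summary) treats the junction’s superconducting phase difference δ as a single macroscopic degree of freedom moving in a tilted washboard potential:

```latex
% Current-biased Josephson junction, standard textbook model (stated for context,
% not taken from the award summary); symbol names are generic.
U(\delta) = -E_J \cos\delta \;-\; \frac{\hbar I_b}{2e}\,\delta,
\qquad E_J = \frac{\hbar I_c}{2e}
```

Here I_b is the bias current and I_c the critical current. For I_b < I_c the phase sits in a shallow well with quantized energy levels, and macroscopic quantum tunneling corresponds to the phase escaping through, rather than over, the barrier.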

This breakthrough bridges microscopic quantum mechanics with macroscopic engineering, laying foundational groundwork for advancements in quantum technologies such as quantum computing, cryptography, and sensors.

Overall strengths include rigorous experimental validation and profound implications for quantum information science, though gaps exist in scalability to room-temperature applications and full mitigation of environmental decoherence.

Framed within the broader context, this award highlights the enduring evolution of quantum mechanics from theoretical curiosity to practical innovation, building on prior Nobel-recognized discoveries like the Josephson effect (1973) and superconductivity mechanisms (1972).

Topsicle: a method for estimating telomere length from whole genome long-read sequencing data

Long read sequencing technology (advanced by Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (Nanopore)) is revolutionizing the genomics field [43], and it has major potential to be a powerful computational tool for investigating telomere length variation within populations and between species. Read lengths from long read sequencing platforms are orders of magnitude longer than those from short read sequencing platforms (tens of kilobase pairs versus 100–300 bp). These long reads have greatly aided in resolving the complex and highly repetitive regions of the genome [44], and near-gapless genome assemblies (also known as telomere-to-telomere assemblies) have been generated for multiple organisms [45, 46]. Long read sequences can also be used for estimating telomere length, since whole genome sequencing on a long read platform contains reads that span the entire telomere and subtelomere region. Computational methods can then be developed to determine the telomere–subtelomere boundary and use it to estimate telomere length. As an example, telomere-to-telomere assemblies have been used for estimating telomere length by analyzing the sequences at the start and end of the gapless chromosome assembly [47,48,49,50]. But generating gapless genome assemblies is resource intensive and is not practical for estimating telomere length across many individuals. Alternatively, methods such as TLD [51], Telogator [52], and TeloNum [53] analyze raw long read sequences to estimate telomere lengths. These methods require a known telomere repeat sequence, but this can be determined through k-mer based analysis [54]. Specialized methods have also been developed to enrich long reads originating from chromosome ends. These methods involve attaching sequencing adapters that are complementary to the single-stranded 3′ G-overhang of the telomere, which can subsequently be used to selectively amplify the chromosome ends for long read sequencing [55,56,57,58]. While these methods can enrich telomeric long reads, they require protocol optimization (e.g., designing the adapter sequence to target the G-overhang), and they are difficult to apply to organisms with naturally blunt-ended telomeres [59, 60].

An explosion of long read sequencing data has been generated for many organisms across the animal and plant kingdoms [61, 62]. A computational method that can use this abundant long read sequencing data to estimate telomere length with minimal requirements would be a powerful toolkit for investigating the biology of telomere length variation. But so far, such a method is not available, and implementing one requires addressing two major algorithmic considerations before it can be widely used across many different organisms. The first algorithmic consideration is the ability to analyze the diverse telomere sequence variation across the tree of life. All vertebrates have an identical telomere repeat motif, TTAGGG [63], and most previous long read sequencing based computational methods were largely designed for analyzing human genomic datasets, where the algorithms are optimized for the TTAGGG telomere motif. But the telomere repeat motif is highly diverse across the animal and plant kingdoms [64,65,66,67], and there are even species in fungi and plants that utilize a mix of repeat motifs, resulting in a sequence-complex telomere structure [64, 68, 69]. A new computational method would need to accommodate these diverse telomere repeat motifs, especially across the inherently noisy and error-prone long read sequencing data [70]. With recent improvements in sequencing chemistry and technology (HiFi sequencing for PacBio and the Q20+ chemistry kit for Nanopore), error rates have been substantially reduced to 1% [71, 72]. But even with this low error rate, a telomeric region that is several kilobase pairs long can harbor substantial erroneous sequence across the read [73] and hinder identification of the correct telomere–subtelomere boundary. In addition, long read sequencers are especially error-prone in repetitive homopolymer sequences [74,75,76], and the GT-rich microsatellite telomere sequences are predicted to be an especially error-prone region for long read sequencing. A second algorithmic consideration relates to identifying the telomere–subtelomere boundary. Prior long read sequencing based methods [51, 52] have used sliding windows to calculate summary statistics and a threshold to determine the boundary between the telomere and subtelomere. Sliding window and threshold based analyses are commonly used in genome analysis, but they place the burden on the user to determine the appropriate cutoff, which for telomere length estimation may differ depending on the sequenced organism. In addition, threshold based sliding window scans can inflate both false positive and false negative results [77,78,79,80,81,82] if the cutoff is improperly determined.

Here, we introduce Topsicle, a computational method that uses a novel strategy to estimate telomere length from raw long reads in an entire whole genome sequencing library. Methodologically, Topsicle iterates through different substring sizes of the telomere repeat sequence (i.e., telomere k-mers) and different phases of the telomere k-mer to summarize the telomere repeat content of each sequencing read. These k-mer based summary statistics of telomere repeats are then used to select long reads originating from telomeric regions. Topsicle uses those putative telomeric reads to estimate telomere length by determining the telomere–subtelomere boundary through a binary segmentation change point detection analysis [83]. We demonstrate the high accuracy of Topsicle through simulations and apply our new method to long read sequencing datasets from three evolutionarily diverse plant species (A. thaliana, maize, and Mimulus) and human cancer cell lines. We believe Topsicle will enable high-resolution explorations of telomere length for more species and help achieve a broader understanding of the genetics and evolution underlying telomere length variation.
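
To make the workflow concrete, below is a minimal sketch of the general idea under simplifying assumptions: telomere repeat content is summarized in fixed windows along a read using all phases of a single telomere k-mer, and the telomere–subtelomere boundary is taken as the least-squares single change point (one step of binary segmentation). The motif, window size, and function names are illustrative; this is not Topsicle’s actual implementation or API.

```python
# Minimal sketch of the general idea (not Topsicle's actual implementation):
# 1) summarize telomere-repeat content per window along a read,
# 2) find the telomere-subtelomere boundary as a single change point.
import numpy as np

def telomere_kmer_density(read: str, motif: str = "TTTAGGG", k: int = 7,
                          window: int = 100) -> np.ndarray:
    """Fraction of positions in each window that start a telomere k-mer (any phase)."""
    doubled = motif * ((k // len(motif)) + 2)
    phases = {doubled[i:i + k] for i in range(len(motif))}   # all rotations of the motif
    hits = np.zeros(len(read), dtype=float)
    for i in range(len(read) - k + 1):
        if read[i:i + k] in phases:
            hits[i] = 1.0
    n_windows = max(1, len(read) // window)
    return np.array([hits[w * window:(w + 1) * window].mean()
                     for w in range(n_windows)])

def single_change_point(signal: np.ndarray) -> int:
    """Least-squares single change point (one step of binary segmentation)."""
    best_idx, best_cost = 1, np.inf
    for i in range(1, len(signal)):
        left, right = signal[:i], signal[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

# Toy usage: a read that is telomeric repeat for ~2.1 kb, then unique sequence.
read = "TTTAGGG" * 300 + "ACGTGCTAGCTAGGCTTACG" * 120
density = telomere_kmer_density(read, motif="TTTAGGG", window=100)
boundary_window = single_change_point(density)
print("estimated telomere length ~", boundary_window * 100, "bp")
```

In this toy setting the change point lands at the end of the repeat block, so the estimated length is the boundary window index times the window size; a real pipeline would additionally select telomeric reads from the whole library and aggregate estimates across reads.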

Researchers develop the first miniaturized ultraviolet spectrometer chip

Recently, the iGaN Laboratory led by Professor Haiding Sun at the School of Microelectronics, University of Science and Technology of China (USTC), together with the team of academician Sheng Liu from Wuhan University, has successfully developed the world’s first miniaturized ultraviolet (UV) spectrometer chip and realized on-chip spectral imaging.

Based on a novel gallium nitride (GaN) cascaded photodiode architecture and integrated with deep neural network (DNN) algorithms, the device achieves high-precision spectral detection and high-resolution multispectral imaging.

With a response speed on the nanosecond scale, it sets a new world record for the fastest reported miniaturized spectrometer. The work, titled “A miniaturized cascaded-diode-array spectral imager,” was published online in Nature Photonics on September 26, 2025.

Matter wave

Schrödinger applied Hamilton’s optico-mechanical analogy to develop his wave mechanics for subatomic particles [67]: xi. Consequently, wave solutions to the Schrödinger equation share many properties with results of light wave optics. In particular, Kirchhoff’s diffraction formula works well for electron optics [29]: 745 and for atomic optics [68]. The approximation works well as long as the electric fields change more slowly than the de Broglie wavelength. Macroscopic apparatus fulfill this condition; slow electrons moving in solids do not.
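
For reference, the length scale in that condition is the de Broglie wavelength (a standard result, stated here for context rather than taken from the excerpt):

```latex
% de Broglie wavelength of a non-relativistic particle of mass m and speed v
\lambda_{\mathrm{dB}} = \frac{h}{p} = \frac{h}{m v}
```

For example, an electron accelerated through 100 V has λ_dB ≈ 0.12 nm, far smaller than the scale over which fields vary in a macroscopic electron-optical apparatus, which is why the optical approximation holds there but fails for slow electrons inside solids.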

Cracking a long-standing weakness in a classic algorithm for programming reconfigurable chips

Researchers from EPFL, AMD, and the University of Novi Sad have uncovered a long-standing inefficiency in the algorithm that programs millions of reconfigurable chips used worldwide, a discovery that could reshape how future generations of these chips are designed and programmed.

Many industries, including telecoms, automotive, and aerospace, rely on a special breed of chip called the Field-Programmable Gate Array (FPGA). Unlike traditional chips, FPGAs can be reconfigured almost endlessly, making them invaluable in fast-moving fields where designing a custom chip would take years and cost a fortune. But this flexibility comes with a catch: FPGA efficiency depends heavily on the software used to program them.

Since the late 1990s, an algorithm known as PathFinder has been the backbone of FPGA routing. Its job: connecting thousands of tiny circuit components without creating overlaps.
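
PathFinder’s core idea is negotiation-based routing: every net is repeatedly re-routed with a shortest-path search whose node costs grow with present and historical congestion, so nets gradually negotiate away overlaps. The sketch below is a simplified illustration of that scheme; the routing graph, cost constants, and function names are toy choices, not the production algorithm.

```python
# Simplified illustration of PathFinder-style negotiated-congestion routing.
# Not the production algorithm: graph, cost weights, and termination are toy choices.
import heapq
from collections import defaultdict

def shortest_path(graph, src, dst, node_cost):
    """Dijkstra over routing nodes, with congestion-aware node costs."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + node_cost(v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def pathfinder_route(graph, nets, max_iters=20):
    history = defaultdict(float)           # accumulated congestion penalty per node
    routes = {}
    for it in range(max_iters):
        usage = defaultdict(int)           # present usage in this routing pass
        for net, (src, dst) in nets.items():
            cost = lambda v: (1.0 + history[v]) * (1.0 + 2.0 * usage[v])
            routes[net] = shortest_path(graph, src, dst, cost)
            for v in routes[net]:
                usage[v] += 1
        overused = [v for v, c in usage.items() if c > 1]
        if not overused:                   # legal routing: no node shared by two nets
            return routes, it + 1
        for v in overused:                 # raise the "rent" on congested nodes
            history[v] += 1.0
    return routes, max_iters

# Toy routing graph: two nets that could both grab the shared middle node "m".
graph = {
    "a": ["m", "x"], "b": ["m", "y"],
    "m": ["a", "b", "c", "d"], "x": ["a", "c"], "y": ["b", "d"],
    "c": ["m", "x"], "d": ["m", "y"],
}
routes, iters = pathfinder_route(graph, {"n1": ("a", "c"), "n2": ("b", "d")})
print(iters, routes)
```

The key design choice this toy preserves is that congestion is priced rather than forbidden: overlaps are allowed early on, and the growing history cost steers later iterations toward conflict-free routes.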

Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play, by Qinsi Wang and 8 other authors

Although reinforcement learning (RL) can effectively enhance the reasoning capabilities of vision-language models (VLMs), current methods remain heavily dependent on labor-intensive datasets that require extensive manual construction and verification, leading to extremely high training costs and consequently constraining the practical deployment of VLMs. To address this challenge, we propose Vision-Zero, a domain-agnostic framework enabling VLM self-improvement through competitive visual games generated from arbitrary image pairs. Specifically, Vision-Zero encompasses three main attributes. (1) Strategic Self-Play Framework: Vision-Zero trains VLMs in “Who Is the Spy”-style games, where the models engage in strategic reasoning and actions across multiple roles. Through interactive gameplay, models autonomously generate their training data without human annotation. (2) Gameplay from Arbitrary Images: unlike existing gamified frameworks, Vision-Zero can generate games from arbitrary images, thereby enhancing the model’s reasoning ability across diverse domains and showing strong generalization to different tasks. We demonstrate this versatility using three distinct types of image datasets: CLEVR-based synthetic scenes, charts, and real-world images. (3) Sustainable Performance Gain: we introduce Iterative Self-Play Policy Optimization (Iterative-SPO), a novel training algorithm that alternates between self-play and reinforcement learning with verifiable rewards (RLVR), mitigating the performance plateau often seen in self-play-only training and achieving sustained long-term improvements. Despite using label-free data, Vision-Zero achieves state-of-the-art performance on reasoning, chart question answering, and vision-centric understanding tasks, surpassing other annotation-based methods. Models and code have been released at https://github.com/wangqinsi1/Vision-Zero.
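
To show only the control flow of the Iterative-SPO alternation described above, here is a toy sketch: every function and the "model" are stand-ins (a dict holding a single scalar), not the paper’s interfaces; the real implementation is in the linked repository.

```python
# Toy sketch of the Iterative-SPO alternation (control flow only).
# All functions and the "model" are stand-ins, not the paper's actual API;
# see https://github.com/wangqinsi1/Vision-Zero for the real implementation.
import random

def play_spy_game(model, image_pair):
    """Stand-in for one "Who Is the Spy"-style game on a pair of images."""
    return {"pair": image_pair, "spy_found": random.random() < model["skill"]}

def verifiable_reward(game):
    """RLVR needs a mechanically checkable reward: was the spy identified?"""
    return 1.0 if game["spy_found"] else 0.0

def self_play_update(model, games):
    # Stand-in for optimizing the self-play objective on self-generated data.
    model["skill"] = min(1.0, model["skill"] + 0.01)
    return model

def rlvr_update(model, games, rewards):
    # Stand-in for reinforcement learning with verifiable rewards.
    avg = sum(rewards) / len(rewards)
    model["skill"] = min(1.0, model["skill"] + 0.05 * (1.0 - avg))
    return model

def iterative_spo(model, image_pairs, rounds=5, games_per_round=32):
    for _ in range(rounds):
        # Phase 1: self-play generates training data without human annotation.
        games = [play_spy_game(model, random.choice(image_pairs))
                 for _ in range(games_per_round)]
        model = self_play_update(model, games)
        # Phase 2: switch to RLVR, which the abstract credits with avoiding
        # the plateau of self-play-only training.
        model = rlvr_update(model, games, [verifiable_reward(g) for g in games])
    return model

print(iterative_spo({"skill": 0.3}, [("img_a.png", "img_b.png")]))
```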

AI techniques excel at solving complex equations in physics, especially inverse problems

Differential equations are fundamental tools in physics: they are used to describe phenomena ranging from fluid dynamics to general relativity. But when these equations become stiff (i.e. they involve very different scales or highly sensitive parameters), they become extremely difficult to solve. This is especially relevant in inverse problems, where scientists try to deduce unknown physical laws from observed data.

To tackle this challenge, the researchers have enhanced the capabilities of Physics-Informed Neural Networks (PINNs), a type of artificial intelligence that incorporates physical laws into its training.

Their approach, reported in Communications Physics, combines two innovative techniques: Multi-Head (MH) training, which allows the neural network to learn a general space of solutions for a family of equations (rather than just one specific case), and Unimodular Regularization (UR), inspired by concepts from differential geometry, which stabilizes the learning process and improves the network’s ability to generalize to new, more difficult problems.
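
The multi-head idea can be illustrated on a toy family of ODEs: a shared trunk network learns features of the solution space while one small head per equation produces that equation’s solution, and the physics enters as a residual term in the loss. The PyTorch sketch below (toy family du/dt = -k·u, Unimodular Regularization omitted) only illustrates the concept; it is not the architecture or equations from the paper.

```python
# Minimal multi-head PINN sketch on a toy family of ODEs:
#   du/dt = -k u,  u(0) = 1,  for several values of k.
# Illustration of the shared-trunk / per-equation-head idea, not the paper's setup.
import torch

torch.manual_seed(0)
ks = [0.5, 1.0, 2.0, 4.0]                       # toy family of equations

trunk = torch.nn.Sequential(                     # shared "general solution space"
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
)
heads = torch.nn.ModuleList([torch.nn.Linear(32, 1) for _ in ks])
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()), lr=1e-3)

t = torch.linspace(0.0, 2.0, 64).reshape(-1, 1)
t.requires_grad_(True)
t0 = torch.zeros(1, 1)                           # collocation point for u(0)

for step in range(3000):
    opt.zero_grad()
    feats = trunk(t)
    loss = torch.tensor(0.0)
    for k, head in zip(ks, heads):
        u = head(feats)
        du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        residual = du_dt + k * u                 # physics (ODE) residual
        ic = (head(trunk(t0)) - 1.0) ** 2        # initial condition u(0) = 1
        loss = loss + residual.pow(2).mean() + ic.mean()
    loss.backward()
    opt.step()

# Sanity check: compare the k = 1 head with the exact solution exp(-t).
with torch.no_grad():
    u1 = heads[1](trunk(t))
print(float((u1 - torch.exp(-t.detach())).abs().max()))
```

Because the trunk is shared across the family, each head only has to learn how its particular equation deviates from the common structure, which is the intuition behind training on a whole space of solutions rather than a single case.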

AI tensor network-based computational framework cracks a 100-year-old physics challenge

Researchers from The University of New Mexico and Los Alamos National Laboratory have developed a novel computational framework that addresses a longstanding challenge in statistical physics.

The Tensors for High-dimensional Object Representation (THOR) AI framework employs tensor network algorithms to efficiently compress and evaluate the extremely large configurational integrals central to determining the thermodynamic and mechanical properties of materials.

The framework was integrated with machine learning potentials, which encode interatomic interactions and dynamical behavior, enabling accurate and scalable modeling of materials across diverse physical conditions.

Compact camera uses 25 color channels for high-speed, high-definition hyperspectral video

A traditional digital camera splits an image into three channels—red, green and blue—mirroring how the human eye perceives color. But those are just three discrete points along a continuous spectrum of wavelengths. Specialized “spectral” cameras go further by sequentially capturing dozens, or even hundreds, of these divisions across the spectrum.

This process is slow, however, meaning that hyperspectral cameras can only take still images, or videos with very low frame rates (frames per second, fps). But what if a high-fps video camera could capture dozens of wavelengths at once, revealing details invisible to the naked eye?

Now, researchers at the University of Utah’s John and Marcia Price College of Engineering have developed a new way of taking a high-definition snapshot that encodes spectral data into images, much like a traditional camera encodes color. Instead of a filter that divides light into three color channels, their specialized filter divides it into 25. Each pixel stores compressed spectral information along with its spatial position, which computer algorithms can later reconstruct into a “cube” of 25 separate images, each representing a distinct slice of the visible spectrum.
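
As a rough illustration of what 25 channels instead of 3 means on the sensor side, the sketch below assumes a simple 5×5 filter mosaic (each tile of 25 pixels samples 25 different passbands) and rebuilds a coarse 25-channel cube by nearest-neighbor fill. Both the layout and the reconstruction are assumptions for illustration only; the actual device encodes compressed spectral information per pixel and uses more sophisticated reconstruction algorithms.

```python
# Toy illustration of turning a 5x5 spectral filter mosaic into a 25-channel cube.
# The assumed mosaic layout and the nearest-neighbor fill are illustrative only.
import numpy as np

def mosaic_to_cube(raw: np.ndarray, tile: int = 5) -> np.ndarray:
    """raw: (H, W) sensor image; returns a (tile*tile, H, W) spectral cube."""
    h, w = raw.shape
    cube = np.zeros((tile * tile, h, w), dtype=raw.dtype)
    for dy in range(tile):
        for dx in range(tile):
            ch = dy * tile + dx
            sub = raw[dy::tile, dx::tile]        # pixels behind this filter passband
            # nearest-neighbor fill back to full resolution
            cube[ch] = np.repeat(np.repeat(sub, tile, axis=0), tile, axis=1)[:h, :w]
    return cube

raw = np.random.rand(100, 100)                    # stand-in for one sensor frame
cube = mosaic_to_cube(raw)
print(cube.shape)                                 # (25, 100, 100)
```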
