
AI accelerates development of advanced heat-dissipating polymers

A machine learning method developed by researchers from the Institute of Science Tokyo, the Institute of Statistical Mathematics, and other institutions predicts the liquid crystallinity of polymers with 96% accuracy. Using it, the team screened over 115,000 polyimides and selected six candidates with a high probability of exhibiting liquid crystallinity. Once synthesized and experimentally characterized, these liquid crystalline polyimides demonstrated thermal conductivities of up to 1.26 W m⁻¹ K⁻¹, accelerating the discovery of efficient thermal materials for next-generation electronics.

Finding new polymer materials that can efficiently dissipate heat while maintaining high reliability is one of the biggest challenges in modern electronics. One promising solution is liquid crystalline polyimides, a special class of polymers whose molecules naturally align into highly ordered structures.

These ordered chains create pathways for heat flow, making liquid crystalline polyimides highly attractive for thermal management in semiconductors, flexible displays, and next-generation devices. However, designing these polymers has long relied on trial and error because researchers lacked clear design rules to predict whether a polymer would form a liquid crystalline phase.
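
To make that screening step concrete, here is a minimal Python sketch of classifier-based candidate ranking in the spirit of the approach described above; the descriptors, dataset, and model choice are hypothetical placeholders rather than the features used in the actual study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: numeric descriptors for known polymers,
# labeled 1 if the polymer exhibits a liquid crystalline phase, else 0.
X_train = rng.normal(size=(500, 8))                         # 8 made-up structural descriptors
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # toy labels for illustration only

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Screen a large candidate library and keep the entries with the highest
# predicted probability of liquid crystallinity.
X_candidates = rng.normal(size=(115000, 8))
proba = clf.predict_proba(X_candidates)[:, 1]
top_six = np.argsort(proba)[::-1][:6]                       # indices of the 6 most promising candidates
print(top_six, proba[top_six])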

Computers reconstruct 3D environments from 2D photos in a fraction of the time

Imagine trying to make an accurate three-dimensional model of a building using only pictures taken from different angles—but you’re not sure where or how far away all the cameras were. Our big human brains can fill in a lot of those details, but computers have a much harder time doing so.

This scenario is a well-known problem in computer vision and robot navigation systems. Robots, for instance, must take in lots of 2D information and build 3D point clouds—collections of data points in 3D space—in order to interpret a scene. But the mathematics involved in this process is challenging and error-prone, with many ways for the computer to incorrectly estimate distances. It’s also slow, because it forces the computer to create its 3D point cloud bit by bit.
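
As a rough illustration of the kind of calculation involved, the Python sketch below triangulates a single 3D point from its projections in two images, assuming the camera matrices are already known; in real pipelines the cameras themselves must also be estimated, which is where the errors and the slowness creep in. The cameras and the point here are made up for the example.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                              # homogeneous -> Euclidean coordinates

# Two hypothetical cameras: one at the origin, one shifted along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0, 1.0])             # ground-truth 3D point (homogeneous)
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]            # its projection in each image
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]

print(triangulate(P1, P2, x1, x2))                   # recovers roughly [0.2, -0.1, 5.0]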

Computer scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) think they have a better method: a breakthrough algorithm that lets computers reconstruct high-quality 3D scenes from 2D images much more quickly than existing methods.

From Algebra to Cosmology: Stephen Wolfram on Physics & the Nature of the Universe

Physicist and computer scientist Stephen Wolfram explores how simple rules can generate complex realities, offering a bold new vision of fundamental physics and the structure of the universe.

Stephen Wolfram is a British-American computer scientist, physicist, and businessman. He is known for his work in computer algebra and theoretical physics. In 2012, he was named a fellow of the American Mathematical Society. He is the founder and CEO of the software company Wolfram Research, where he works as chief designer of Mathematica and the Wolfram Alpha answer engine.

Watch more CTT Chats here: https://t.ly/jJI7e

‘Neglected’ particles that could rescue quantum computing

One of the most promising approaches to overcoming the fragility of quantum information is topological quantum computing, which aims to protect that information by encoding it in the geometric properties of exotic particles called anyons. These particles, predicted to exist in certain two-dimensional materials, are expected to be far more resistant to noise and interference than conventional qubits.

“Among the leading candidates for building such a computer are Ising anyons, which are already being intensely investigated in condensed matter labs due to their potential realization in exotic systems like the fractional quantum Hall state and topological superconductors,” said Aaron Lauda, professor of mathematics, physics and astronomy at the USC Dornsife College of Letters, Arts and Sciences and the study’s senior author. “On their own, Ising anyons can’t perform all the operations needed for a general-purpose quantum computer. The computations they support rely on ‘braiding,’ physically moving anyons around one another to carry out quantum logic. For Ising anyons, this braiding only enables a limited set of operations known as Clifford gates, which fall short of the full power required for universal quantum computing.”

But in a new study published in Nature Communications, a team of mathematicians and physicists led by USC researchers has demonstrated a surprising workaround. By adding a single new type of anyon, which was previously discarded in traditional approaches to topological quantum computation, the team shows that Ising anyons can be made universal, capable of performing any quantum computation through braiding alone. The team dubbed these rescued particles neglectons, a name that reflects both their overlooked status and their newfound importance. This new anyon emerges naturally from a broader mathematical framework and provides exactly the missing ingredient needed to complete the computational toolkit.

Quantum framework offers new approach to analyzing complex network data

Whenever we mull over what film to watch on Netflix, or deliberate between different products on an e-commerce platform, the gears of recommendation algorithms spin under the hood. These systems sort through sprawling datasets to deliver personalized suggestions. However, as data becomes richer and more interconnected, today’s algorithms struggle to capture relationships that span more than just pairs, such as group ratings, cross-category tags, or interactions shaped by time and context.

A team of researchers led by Professor Kavan Modi from the Singapore University of Technology and Design (SUTD) has taken a conceptual leap into this complexity by developing a new quantum framework for analyzing higher-order network data.

Their work centers on a mathematical field called topological signal processing (TSP), which encodes not only connections between pairs of points but also relationships among triplets, quadruplets, and beyond. Here, “signals” are information that lives on higher-dimensional shapes (triangles or tetrahedra) embedded in a network.
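
To make “signals on triangles” concrete, here is a small self-contained Python sketch, not taken from the SUTD paper, of how higher-order structure is commonly encoded in topological signal processing: boundary matrices record how edges attach to nodes and how triangles attach to edges, and the Hodge Laplacian built from them acts on signals defined on edges rather than on nodes.

import numpy as np

# One filled triangle on nodes {0, 1, 2} with oriented edges e0=(0,1), e1=(0,2), e2=(1,2).
B1 = np.array([   # node-to-edge boundary matrix (nodes x edges)
    [-1, -1,  0],
    [ 1,  0, -1],
    [ 0,  1,  1],
])
B2 = np.array([   # edge-to-triangle boundary matrix (edges x triangles)
    [ 1],
    [-1],
    [ 1],
])

# Hodge 1-Laplacian: combines how edges meet at nodes and how edges bound triangles.
L1 = B1.T @ B1 + B2 @ B2.T

edge_signal = np.array([0.5, -0.2, 1.0])   # one value per edge, e.g. a flow or a pairwise rating
print(L1 @ edge_signal)                    # measures how "smooth" the edge signal is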

Gaussian processes provide a new path toward quantum machine learning

Neural networks revolutionized machine learning for classical computers, making self-driving cars, language translation, and much of modern artificial intelligence software possible. It is no wonder, then, that researchers wanted to transfer this same power to quantum computers—but all attempts to do so ran into unforeseen problems.

Recently, however, a team at Los Alamos National Laboratory developed a new way to bring this same power to quantum computers by leveraging something called the Gaussian process.

“Our goal for this project was to see if we could prove that genuine quantum Gaussian processes exist,” said Marco Cerezo, the Los Alamos team’s lead scientist. “Such a result would spur innovations and new forms of performing quantum machine learning.”
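
For readers unfamiliar with the term, the short Python sketch below shows what an ordinary, classical Gaussian process regression looks like with a standard RBF kernel; it is background only and does not capture the genuinely quantum Gaussian processes the Los Alamos team studied. The data points and kernel settings are made up for the example.

import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

# Noisy observations of an unknown function.
X_train = np.array([-3.0, -1.0, 0.0, 2.0])
y_train = np.sin(X_train) + 0.05 * np.random.default_rng(0).normal(size=4)

X_test = np.linspace(-4, 4, 9)
jitter = 1e-4

K = rbf_kernel(X_train, X_train) + jitter * np.eye(len(X_train))
K_s = rbf_kernel(X_train, X_test)

# Posterior mean and uncertainty of the Gaussian process at the test points.
mean = K_s.T @ np.linalg.solve(K, y_train)
cov = rbf_kernel(X_test, X_test) - K_s.T @ np.linalg.solve(K, K_s)
print(mean)
print(np.sqrt(np.clip(np.diag(cov), 0, None)))   # predictive standard deviation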

Life’s emergence from non-living matter found more complex than previously understood

A new study published in July 2025 tackles one of science’s most profound mysteries—how did life first emerge from nonliving matter on early Earth? Using cutting-edge mathematical approaches, researcher Robert G. Endres from Imperial College London has developed a framework that suggests the spontaneous origin of life faces far greater challenges than previously understood.

A thermodynamic approach to machine learning: How optimal transport theory can improve generative models

Joint research led by Sosuke Ito of the University of Tokyo has shown that nonequilibrium thermodynamics, a branch of physics that deals with constantly changing systems, explains why optimal transport theory, a mathematical framework for transforming one distribution into another at minimal cost, makes generative models optimal. As nonequilibrium thermodynamics has yet to be fully leveraged in designing generative models, the discovery offers a novel thermodynamic approach to machine learning research. The findings were published in the journal Physical Review X.

Image generation has been improving in leaps and bounds over recent years: a video of a celebrity eating a bowl of spaghetti that represented the state of the art a couple of years ago would not even qualify as good today. The algorithms that power image generation are called diffusion models, and they contain randomness called “noise.”

During the training process, noise is introduced to the original data through diffusion dynamics. During the generation process, the model must eliminate the noise to generate new content from the noisy data. This is achieved by considering the time-reversed dynamics, as if playing the video in reverse. One piece of the art and science of building a model that produces high-quality content is specifying when and how much noise is added to the data.
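
As a minimal sketch of that forward noising step (a generic diffusion model, not the specific models analyzed in the paper), the Python snippet below uses a simple variance schedule to control exactly when and how much noise is mixed into the data; generation then amounts to learning to run this process in reverse.

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise schedule: little noise early, more later
alphas_bar = np.cumprod(1.0 - betas)      # cumulative fraction of the original signal kept

def noisy_sample(x0, t, rng):
    """Draw x_t from q(x_t | x_0) = N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(4)                           # a toy "image" with four pixels
for t in (0, 500, 999):
    print(t, noisy_sample(x0, t, rng))    # progressively closer to pure noise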
