
Enzyme as Maxwell’s Demon: Steady-State Deviation from Chemical Equilibrium by Enhanced Enzyme Diffusion

Note: This is elegant theoretical physics showing an intriguing possibility, not a confirmed biological mechanism. It’s a “what if” scenario that could change how we view enzymes, but only if the controversial premise (EED) turns out to be real.

Enhanced enzyme diffusion (EED), in which the diffusion coefficient of an enzyme transiently increases during catalysis, has been extensively reported experimentally, although its existence remains under debate. In this Letter, we investigate what macroscopic consequences would arise if EED exists. Through numerical simulations and theoretical analysis, we demonstrate that such enzymes can act as Maxwell’s demons: They use their enhanced diffusion as a memory of the previous catalytic reaction, to gain information and drive steady-state chemical concentrations away from chemical equilibrium. Our theoretical analysis identifies the conditions under which this process could operate and discusses its possible biological relevance.
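
The Letter’s simulations and analysis go well beyond anything a snippet can reproduce, but the core ingredient, a catalyst whose mobility briefly remembers its last reaction, can be sketched in a few lines. Everything in the toy below (rates, the two-box geometry, which step triggers the boost) is an invented illustration; whether the steady-state S:P ratio actually drifts away from its equilibrium value in such a cartoon depends on the parameters and asymmetries chosen, and it is the Letter’s analysis that identifies the real conditions.

```python
# Toy kinetic Monte Carlo sketch (not the Letter's model): one enzyme hops
# between two boxes and catalyses a reversible reaction S <-> P; each forward
# catalytic event boosts its hop rate for a short "memory" window, mimicking
# enhanced enzyme diffusion (EED). All rates and counts are made up.
import random

DT, T_TOTAL = 1e-2, 200.0
K_CAT = 0.01                    # catalytic rate per molecule
HOP_SLOW, HOP_FAST = 0.05, 1.0  # enzyme hop rate: baseline vs. boosted
TAU_BOOST = 0.5                 # how long enhanced diffusion lasts
D_MOL = 0.02                    # hop rate of substrate/product molecules

S, P = [200, 200], [200, 200]   # molecules of S and P in box 0 and box 1
box, boost_left, t = 0, 0.0, 0.0

while t < T_TOTAL:
    if random.random() < K_CAT * S[box] * DT:       # S -> P triggers the boost
        S[box] -= 1; P[box] += 1; boost_left = TAU_BOOST
    elif random.random() < K_CAT * P[box] * DT:     # P -> S (same bare rate)
        P[box] -= 1; S[box] += 1
    hop = HOP_FAST if boost_left > 0 else HOP_SLOW  # enzyme diffusion
    if random.random() < hop * DT:
        box = 1 - box
    for pool in (S, P):                             # molecules diffuse too
        m0 = sum(random.random() < D_MOL * DT for _ in range(pool[0]))
        m1 = sum(random.random() < D_MOL * DT for _ in range(pool[1]))
        pool[0] += m1 - m0; pool[1] += m0 - m1
    boost_left = max(0.0, boost_left - DT)
    t += DT

print("S per box:", S, " P per box:", P, " total S:P =", sum(S), ":", sum(P))
```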

What babies can teach AI

Researchers at Google DeepMind tried to teach an AI system to have that same sense of “intuitive physics” by training a model that learns how things move by focusing on objects in videos instead of individual pixels. They trained the model on hundreds of thousands of videos to learn how an object behaves. If babies are surprised by something like a ball suddenly flying out of the window, the theory goes, it is because the object is moving in a way that violates the baby’s understanding of physics. The researchers at Google DeepMind managed to get their AI system, too, to show “surprise” when an object moved differently from the way it had learned that objects move.
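
As a cartoon of that “surprise” signal (nothing like DeepMind’s actual object-based video model, just the underlying prediction-error idea), one can score each frame by how badly a simple learned motion rule predicts an object’s next position; motion that violates the rule produces a spike. The trajectories below are invented for illustration.

```python
# Surprise as prediction error over an object trajectory: a toy stand-in for
# an object-centric video model. Both trajectories below are synthetic.
import numpy as np

def surprise(track):
    """Per-frame error of a constant-velocity prediction of the next position."""
    pred = 2 * track[1:-1] - track[:-2]        # x_t + (x_t - x_{t-1})
    return np.linalg.norm(track[2:] - pred, axis=1)

t = np.arange(20)[:, None]
normal = np.hstack([1.0 * t, 10 - 0.05 * t**2])   # ball on a smooth arc
weird = normal.copy()
weird[12:] += np.array([0.0, 8.0])                # ball suddenly leaps upward

print("max surprise, physical motion :", round(surprise(normal).max(), 2))
print("max surprise, impossible jump :", round(surprise(weird).max(), 2))
```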

Yann LeCun, a Turing Award winner and Meta’s chief AI scientist, has argued that teaching AI systems to observe like children might be the way forward to more intelligent systems. He says humans have a simulation of the world, or a “world model,” in our brains, allowing us to know intuitively that the world is three-dimensional and that objects don’t actually disappear when they go out of view. It lets us predict where a bouncing ball or a speeding bike will be in a few seconds’ time. He’s busy building entirely new architectures for AI that take inspiration from how humans learn. We covered his big bet for the future of AI here.

The AI systems of today excel at narrow tasks, such as playing chess or generating text that sounds like something written by a human. But compared with the human brain—the most powerful machine we know of—these systems are brittle. They lack the sort of common sense that would allow them to operate seamlessly in a messy world, do more sophisticated reasoning, and be more helpful to humans. Studying how babies learn could help us unlock those abilities.

Experiment clarifies cosmic origin of rare proton-rich isotope selenium-74

Researchers have reported new experimental results addressing the origin of rare proton-rich isotopes heavier than iron, called p-nuclei. Led by Artemis Tsantiri, then a graduate student at the Facility for Rare Isotope Beams (FRIB) and now a postdoctoral fellow at the University of Regina in Canada, the study presents the first rare isotope beam measurement of proton capture on arsenic-73 to produce selenium-74, providing new constraints on how the lightest p-nucleus is formed and destroyed in the cosmos.

The team published the results in Physical Review Letters in a paper titled “Constraining the Synthesis of the Lightest Nucleus ⁷⁴Se”. The work involved more than 45 participants from 20 institutions in the United States, Canada, and Europe.

A central question in nuclear astrophysics concerns how and where chemical elements are formed. The slow and rapid neutron-capture processes account for many intermediate-mass and heavy nuclei beyond iron through repeated neutron captures followed by radioactive decays until stable isotopes are reached.
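
For readers unfamiliar with that bookkeeping, the sketch below walks a nucleus through the slow-capture version of the process on a tiny, simplified chart of stable isotopes. It is a cartoon of the general mechanism described in the paragraph above, not a model of the arsenic-73 proton-capture measurement, and the stability table is deliberately minimal.

```python
# Cartoon of slow neutron capture (s-process): capture a neutron (N -> N+1),
# and whenever the nucleus lands on an unstable isotope, beta-decay
# (Z -> Z+1, N -> N-1) until it is stable again. The "stable" set is a tiny
# hand-picked toy chart around iron, not a real nuclear data table.
stable = {(26, 30), (26, 31), (26, 32), (27, 32), (28, 30), (28, 32),
          (28, 33), (28, 34), (29, 34), (29, 36), (30, 34), (30, 36)}

def s_process_walk(Z, N, captures=8):
    path = [(Z, N)]
    for _ in range(captures):
        N += 1                       # neutron capture
        while (Z, N) not in stable:  # beta-minus decays back to stability
            Z, N = Z + 1, N - 1
            if Z > 30:               # fell off the edge of the toy chart
                return path
        path.append((Z, N))
    return path

for Z, N in s_process_walk(26, 30):  # start from a stable iron isotope
    print(f"Z={Z:2d}  N={N:2d}  A={Z + N}")
```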

J. Richard Gott — Why Did Our Universe Begin?

That the universe began seems astonishing. What brought it about? What forces were involved? How did the laws of nature generate the vast expanse of billions of galaxies of billions of stars and planets in the structures that we see today? What new physics was involved? What more must we learn?

John Richard Gott III is a Professor of Astrophysical Sciences at Princeton University who is noted for his contributions to cosmology and general relativity.

Are your memories illusions? New study disentangles the Boltzmann brain paradox

In a recent paper, SFI Professor David Wolpert, SFI Fractal Faculty member Carlo Rovelli, and physicist Jordan Scharnhorst examine a longstanding, paradoxical thought experiment in statistical physics and cosmology known as the “Boltzmann brain” hypothesis—the possibility that our memories, perceptions, and observations could arise from random fluctuations in entropy rather than reflecting the universe’s actual past. The work is published in the journal Entropy.

The paradox arises from a tension at the heart of statistical physics. One of the central pillars of our understanding of the time-asymmetric second law of thermodynamics is Boltzmann’s H theorem, a fundamental concept in statistical mechanics. However, paradoxically, the H theorem is itself symmetric in time.

That time-symmetry implies that it is, formally speaking, far more likely for the structures of our memories, perceptions, and observations to arise from random fluctuations in the universe’s entropy than to represent genuine records of our actual external universe in the past. In other words, statistical physics seems to force us to conclude that our memories might be spurious—elaborate illusions produced by chance that tell us nothing about what we think they do. This is the Boltzmann brain hypothesis.
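
The force of the paradox comes from a simple exponential counting argument. In back-of-envelope form (this is the standard textbook sketch, not a formula quoted from the Entropy paper), the probability of a spontaneous downward fluctuation of size ΔS in entropy is exponentially suppressed:

```latex
P(\text{fluctuation}) \;\propto\; e^{-\Delta S / k_B},
\qquad
\frac{P(\text{a lone brain with false memories})}{P(\text{the entire observed universe})}
  \;=\; e^{\left(\Delta S_{\text{universe}} - \Delta S_{\text{brain}}\right)/k_B} \;\gg\; 1 .
```

Because assembling only a brain costs vastly less entropy than assembling everything that brain thinks it remembers, fluctuation-born observers would, by this counting, overwhelmingly outnumber observers with genuine histories.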

NASA supercomputer just predicted Earth’s hard limit for life

Scientists have used a NASA-grade supercomputer to push our planet to its limits, virtually fast‑forwarding the clock until complex organisms can no longer survive. The result is a hard upper bound on how long Earth can sustain breathable air and liquid oceans, and it is far less about sudden catastrophe than a slow suffocation driven by the Sun itself. The work turns a hazy, far‑future question into a specific timeline for the end of life as we know it.

Instead of fireballs or rogue asteroids, the simulations point to a world that quietly runs out of oxygen, with only hardy microbes clinging on before even they disappear. It is a stark reminder that Earth’s habitability is not permanent, yet it also stretches over such vast spans of time that our immediate crises still depend on choices made this century, not on the Sun’s distant evolution.

The new modeling effort starts from a simple premise: if we know how the Sun brightens over time and how Earth’s atmosphere responds, we can calculate when conditions for complex life finally fail. Researchers fed detailed physics of the atmosphere, oceans, and carbon cycle into a high-performance system, then let it run through hundreds of thousands of scenarios until the planet’s chemistry tipped past a critical point. One study describes a supercomputer simulation that projects life on Earth ending in roughly 1 billion years, once rising solar heat strips away most atmospheric oxygen.
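
The study itself couples atmosphere, ocean, and biogeochemistry models that no snippet can reproduce, but the first ingredient, how quickly the Sun brightens, has a standard analytic approximation (Gough 1981). The sketch below uses it to ask when the Sun becomes about 10% brighter than today, a commonly quoted rough stress threshold for the biosphere; the 10% figure is an illustrative assumption here, not a number taken from the study.

```python
# Toy timeline from Gough's (1981) fit for solar brightening:
#   L(t) / L_now = 1 / (1 + 0.4 * (1 - t / t_now)),  with t the Sun's age.
# The 10% brightening threshold below is an assumption for illustration only.
T_NOW_GYR = 4.57                      # present age of the Sun, in Gyr

def luminosity_ratio(age_gyr):
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / T_NOW_GYR))

THRESHOLD = 1.10                      # assumed L / L_now where trouble begins
age = T_NOW_GYR
while luminosity_ratio(age) < THRESHOLD:
    age += 0.001                      # march forward in 1-Myr steps

print(f"Sun reaches {THRESHOLD:.2f} x present brightness at age {age:.2f} Gyr,")
print(f"i.e. roughly {age - T_NOW_GYR:.2f} Gyr from now")
```

That crude estimate lands near the one-billion-year figure quoted above, though by a much simpler route than the full simulation.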

A century’s worth of data could help predict future solar cycle activity

Research conducted by an international team of astronomers from Southwest Research Institute, the Aryabhatta Research Institute of Observational Sciences in India, and the Max Planck Institute in Germany could help predict upcoming solar cycle activity.

To enable these predictions, the team has devised a new way to look at historical data from the Kodaikanal Solar Observatory (KoSO), a field station of the Indian Institute of Astrophysics (IIA) Bangalore, to reconstruct the sun’s polar magnetic behavior over more than 100 years.

“We needed to find the polar magnetic information hidden in the historical data,” said SwRI scientist Dr. Bibhuti Kumar Jha, second author of a paper about these findings. “To start, we cleaned up and calibrated early data to today’s standards and then correlated patterns with modern observations. I addressed anomalies like time zone slips and rotation errors to enable this kind of study.”
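
The quote describes a cross-calibration problem: put century-old measurements on the same scale as modern ones, then use the overlap between the two records to transfer information. The snippet below shows that generic pattern on synthetic data with a plain least-squares fit; it is not the team’s actual pipeline, and the numbers are invented.

```python
# Generic cross-calibration sketch (synthetic data, not the KoSO pipeline):
# fit a gain and offset that map a historical index onto a modern reference
# over the years where the two records overlap.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2000)
modern = 50 + 30 * np.sin(2 * np.pi * (years - 1950) / 11) + rng.normal(0, 2, years.size)
historical = 0.6 * modern + 12 + rng.normal(0, 2, years.size)  # different scale/offset

A = np.vstack([historical, np.ones_like(historical)]).T        # overlap-period fit
gain, offset = np.linalg.lstsq(A, modern, rcond=None)[0]
calibrated = gain * historical + offset

print(f"recovered gain ~ {gain:.2f}, offset ~ {offset:.1f}")
print(f"rms mismatch after calibration: {np.sqrt(np.mean((calibrated - modern)**2)):.2f}")
```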

Physicists employ AI labmates to supercharge LED light control

In 2023, a team of physicists from Sandia National Laboratories announced a major discovery: a way to steer LED light. If refined, it could mean someday replacing lasers with cheaper, smaller, more energy-efficient LEDs in countless technologies, from UPC scanners and holographic projectors to self-driving cars. The team assumed it would take years of meticulous experimentation to refine their technique.

Now the same researchers have reported that a trio of artificial intelligence labmates has improved their best results fourfold. It took about five hours.

The resulting paper, now published in Nature Communications, shows how AI is advancing beyond a mere automation tool toward becoming a powerful engine for clear, comprehensible scientific discovery.

Bridging theories across physics helps reconcile controversy about thin liquid layer on icy surfaces

The ice in a domestic freezer is remarkably different from the single crystals that form in snow clouds, or even those formed on a frozen pond. As temperatures drop, ice crystals can grow in a variety of shapes: from stocky hexagonal prisms to flat plates, to Grecian columns.

Why this structural roller coaster happens, though, is a mystery. When the variety was first observed, researchers thought it must relate to a hypothesis proposed by famed physicist Michael Faraday: that ice below its melting point has a microscopically thin liquid layer of water across its surface.

This “premelting film” of ice, however, is the subject of significant scientific controversy. For years, researchers have provided contradictory evidence about its thickness and whether it even exists.
