
A Token of Our Imagination: The Invisible Economy Powering GenAI

Ever wonder what actually happens inside the AI after you hit “Enter”?

You type a prompt into your favorite generative AI, and within seconds, your screen fills with exactly what you asked for—whether it’s a quarterly report or a cinematic image of a cyberpunk golden retriever. It feels like absolute magic.

But behind that seamless curtain lies a bustling, microscopic economy running entirely on a digital currency you’ve probably heard of but might not fully understand: the token.

Most of us only ever see the input and the output. We don’t see the internal cash register ringing, the mathematical gymnastics, or the sprawling “assembly line” churning through billions of calculations.

What actually happens between the moment you hit send and the moment your final masterpiece appears? In my newest blog post, I peel back the curtain to trace the fascinating journey of an AI token.

I break down this invisible economy—from the “toll booth” of the input phase to the heavy lifting of the output phase—and show you exactly how the machine balances the books.
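To make the ledger concrete, here is a minimal sketch of the counting step using OpenAI's open-source tiktoken tokenizer. The per-token prices are made-up placeholders rather than any provider's actual rates, but the pattern, count input tokens, count output tokens, bill each at its own rate, is the invisible economy in miniature.

```python
# A minimal sketch of the token "cash register": count input and output
# tokens with tiktoken, then price them with illustrative (made-up) rates.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several GPT models

prompt = "Write a quarterly report summary for our widget division."
completion = "Q3 widget revenue rose 12% quarter over quarter, driven by..."

input_tokens = len(enc.encode(prompt))
output_tokens = len(enc.encode(completion))

# Hypothetical prices; output tokens typically cost more than input tokens,
# since generation is where the heavy lifting happens.
PRICE_IN, PRICE_OUT = 0.50 / 1_000_000, 1.50 / 1_000_000  # dollars per token

cost = input_tokens * PRICE_IN + output_tokens * PRICE_OUT
print(f"{input_tokens} tokens in + {output_tokens} tokens out -> ${cost:.8f}")
```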


AI Is Now Improving Itself

In 1965, a mathematician who worked alongside Alan Turing wrote a single
paragraph that has haunted AI research ever since. He predicted that one
day, a machine would learn to improve itself, and that everything after
that point would change.

Sixty years later, that loop is starting to close.

In this video, we trace how AI got here: from I.J. Good’s 1965 prediction,
to AlphaGo Zero teaching itself Go in 72 hours, to AlphaEvolve cracking a
math problem that had stood unbeaten for 56 years, and then quietly
speeding up the training of the very model that runs it. We look at the
data behind the trend (autonomous AI task length is doubling every 7
months), the walls AI keeps running into (compute, data, energy), and what
the people building this technology are actually saying about how close
we are.

This video is an honest look at what the evidence actually shows.
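As a back-of-the-envelope companion to the doubling claim above, this sketch extrapolates an autonomous task horizon that doubles every 7 months; the 1-hour starting point and the perfectly clean doubling are illustrative assumptions, not measured data.

```python
# Toy extrapolation of the trend cited in the video: task horizon that
# doubles every 7 months. Starting horizon of 1 hour is an assumption.
def task_horizon_hours(months_from_now, start_hours=1.0, doubling_months=7.0):
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 7, 14, 28, 56):
    print(f"{months:>3} months: ~{task_horizon_hours(months):,.0f} hours")
```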

Small talk shapes big trends: Physics predicts how language patterns spread

A new model to predict how language changes over time has been developed by a statistical physicist at the University of Portsmouth. The model is a step towards understanding the “statistical physics of language,” a scientific theory which borrows ideas from the physics of interacting particles to explain how words, accents, and dialects spread, shift, and disappear across regions and generations, and how they might change in future. The research is published in the journal Physical Review E.

James Burridge, Professor of Probability and Statistical Physics, from the University’s School of Mathematics and Physics, said, “Just as meteorologists use mathematical models to forecast tomorrow’s weather, the same kind of thinking can be applied to language.

“Where you are affects how you speak, and if you map how people use certain words, you see clear geographic patterns—just like a weather map. However, the physics of language is closer to crystals and magnets than the atmosphere.”
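That crystals-and-magnets intuition can be made concrete with a toy interacting-particle simulation. The sketch below is a plain two-variant voter model on a grid, my own illustrative stand-in rather than the model in the paper: each site holds one of two competing word variants and repeatedly copies a random neighbor, and regional patches emerge from local copying alone.

```python
# Two-variant voter model: local imitation produces regional "dialect" patches.
import numpy as np

rng = np.random.default_rng(0)
N = 64
grid = rng.integers(0, 2, size=(N, N))                # 0/1 = two word variants
moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])  # 4 nearest neighbors

for _ in range(200_000):
    i, j = rng.integers(0, N, size=2)                 # pick a random speaker
    di, dj = moves[rng.integers(4)]                   # pick a random neighbor
    grid[i, j] = grid[(i + di) % N, (j + dj) % N]     # copy the neighbor's variant

print(f"variant 1 now covers {grid.mean():.1%} of the map")
```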

Mathematical framework solves asteroid route planning exactly for first time

A new publication from Bielefeld University sets a benchmark in optimization research. Together with an international team, Professor Michael Römer from the Faculty of Business Administration and Economics has developed a mathematical framework that solves a complex problem from space logistics exactly for the first time: the optimal planning of a route to visit several asteroids under conditions that are as close to reality as possible. The study is published in the INFORMS Journal on Computing.

At the center of the research is the so-called Asteroid Routing Problem. It addresses the question: In what order should a spacecraft visit multiple asteroids if both travel time and fuel consumption are to be minimized? The challenge is that, unlike in classical routing problems, the travel time between destinations is constantly changing because all celestial bodies are in continuous motion.
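For intuition, here is a minimal sketch of exact search for such a time-dependent route using Held-Karp dynamic programming. This is an illustrative toy, not the authors' method, and transfer_time is a hypothetical stand-in for real orbital-mechanics calculations.

```python
# Exact time-dependent routing via Held-Karp dynamic programming.
import math

def transfer_time(a, b, t):
    # Toy leg duration from body a to body b when departing at time t;
    # a hypothetical stand-in for real orbital-transfer calculations.
    return abs(a - b) + 1.0 + 0.5 * math.sin(0.1 * t + a + b)

def best_route(n):
    # States are (visited set, last body) -> (earliest arrival time, path).
    # Exact when departing earlier never means arriving later (FIFO property).
    states = {(frozenset([k]), k): (transfer_time(-1, k, 0.0), [k])
              for k in range(n)}                     # depart Earth (-1) at t = 0
    for _ in range(n - 1):
        new = {}
        for (visited, last), (t, path) in states.items():
            for k in range(n):
                if k in visited:
                    continue
                key = (visited | {k}, k)
                cand = (t + transfer_time(last, k, t), path + [k])
                if key not in new or cand[0] < new[key][0]:
                    new[key] = cand
        states = new
    return min(states.values())

print(best_route(6))  # -> (total travel time, visiting order)
```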

The idea for the study originated in Bielefeld, sparked by a success in a competition organized by the European Space Agency (ESA). During a research stay in Bielefeld, lead author Isaac Rudich revisited the topic and, together with the team, developed a new solution approach.

What If The Universe Is Math?


In his essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” the physicist Eugene Wigner said that “the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious.” This statement was inspired by the observation that so many aspects of the physical world seem to be describable and predictable by mathematical equations to incredible precision, quantum phenomena especially. But quantum phenomena have no subjective qualities and have questionable physicality. They seem to be completely describable by numbers alone, and their behavior precisely defined by equations. In a sense, the quantum world is made of math. So does that mean the universe is made of math too? If you believe the Mathematical Universe Hypothesis, then yes. And so are you.


AI tackles one of math’s most brutal problems: Inverse PDEs

Penn Engineers have developed a new way to use AI to solve inverse partial differential equations (PDEs), a particularly challenging class of mathematical problems with broad implications for understanding the natural world.

The advance, which the researchers call “Mollifier Layers,” could benefit fields as varied as genetics and weather forecasting, because inverse PDEs help scientists work backward from observable patterns to infer the hidden dynamics that produced them.

“Solving an inverse problem is like looking at ripples in a pond and working backward to figure out where the pebble fell,” says Vivek Shenoy, Eduardo D. Glandt President’s Distinguished Professor in Materials Science and Engineering (MSE) and senior author of a study published in Transactions on Machine Learning Research (TMLR), which will be presented at the Conference on Neural Information Processing Systems (NeurIPS 2026). “You can see the effects clearly, but the real challenge is inferring the hidden cause.”
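As a toy version of that pond analogy (and not the paper's Mollifier Layers), the sketch below diffuses a hidden point source into an observable field and then recovers the source location by fitting the forward model to noisy observations. The forward model, noise level, and brute-force scan are all illustrative assumptions.

```python
# Toy inverse problem: infer where the "pebble" fell from its diffused field.
import numpy as np

x = np.linspace(-5, 5, 201)
D, T = 0.5, 1.0  # diffusivity and elapsed time of the toy heat equation

def forward(source):
    # Field produced by a unit point source at `source` after time T
    # (1D heat kernel).
    return np.exp(-(x - source) ** 2 / (4 * D * T)) / np.sqrt(4 * np.pi * D * T)

rng = np.random.default_rng(0)
true_source = 1.3
observed = forward(true_source) + rng.normal(0, 0.005, x.size)  # noisy "ripples"

# Invert: scan candidate source positions and keep the best data fit.
candidates = np.linspace(-5, 5, 2001)
errors = [np.sum((forward(s) - observed) ** 2) for s in candidates]
print(f"recovered source at x = {candidates[int(np.argmin(errors))]:.2f}")
```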

A new way to understand the evolution of spacetime dynamics

The concept of spacetime, first described in Einstein’s theory of general relativity, has since been widely studied by many physicists worldwide. Spacetime is described mathematically as a four-dimensional (4D) continuum in which physical events occur, merging three-dimensional (3D) space with one-dimensional (1D) time.

This 4D continuum is known to continuously evolve following complex and intricate patterns governed by Einstein’s field equations: mathematical equations that describe how matter and energy shape spacetime. While various past theoretical studies have explored the evolution of spacetime, identifying patterns that persist during this evolution has so far proved challenging.
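For reference, the field equations referred to above can be written compactly in standard notation (nothing here is specific to the new study):

```latex
% Einstein's field equations: spacetime geometry (left) is sourced by matter
% and energy (right). G_{\mu\nu} is the Einstein tensor, \Lambda the
% cosmological constant, g_{\mu\nu} the metric, T_{\mu\nu} the stress-energy tensor.
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```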

Researchers at Adolfo Ibáñez University in Chile and Columbia University set out to explore the evolution of spacetime using ideas rooted in nonlinear electrodynamics, an area of physics that studies the behavior of electric and magnetic fields in complex materials.

Quantum-informed machine learning for predicting spatiotemporal chaos with practical quantum advantage

Ultimately, QIML proves that we don’t need a fully fault-tolerant quantum computer to see results. By using quantum processors to learn the complex “rules” of chaos, we can give classical computers the boost they need to make reliable, long-term predictions about the most turbulent environments in the natural world.


Modeling high-dimensional dynamical systems remains one of the most persistent challenges in computational science. Partial differential equations (PDEs) provide the mathematical backbone for describing a wide range of nonlinear, spatiotemporal processes across scientific and engineering domains (1–3). However, high-dimensional systems are notoriously sensitive to initial conditions and the floating-point numbers used to compute them (4–7), making it highly challenging to extract stable, predictive models from data. Modern machine learning (ML) techniques often struggle in this regime: While they may fit short-term trajectories, they fail to learn the invariant statistical properties that govern long-term system behavior. These challenges are compounded in high-dimensional settings, where data are highly nonlinear and contain complex multiscale spatiotemporal correlations.
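The sensitivity described above is easy to demonstrate. In the sketch below, two logistic-map trajectories starting 10^-12 apart (an arbitrary illustrative gap) diverge to order-one separation within a few dozen iterations, which is why long-horizon point forecasts of chaotic systems fail even at float64 precision.

```python
# Sensitive dependence on initial conditions in the chaotic logistic map:
# a 1e-12 perturbation grows to order-one separation in ~60 iterations.
x, y = 0.4, 0.4 + 1e-12
for step in range(1, 61):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```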

ML has seen transformative success in domains such as large language models (8, 9), computer vision (10, 11), and weather forecasting (12–15), and it is increasingly being adopted in scientific disciplines under the umbrella of scientific ML (16). In fluid mechanics, in particular, ML has been used to model complex flow phenomena, including wall modeling (17, 18), subgrid-scale turbulence (19, 20), and direct flow field generation (21, 22). Physics-informed neural networks (23, 24) attempt to inject domain knowledge into the learning process, yet even these models struggle with the long-term stability and generalization that high-dimensional dynamical systems demand. To address this, generative models such as generative adversarial networks (25) and operator-learning architectures such as DeepONet (26) and Fourier neural operators (FNO) (27) have been proposed. While neural operators offer discretization invariance and strong representational power for PDE-based systems, they still suffer from error accumulation and prediction divergence over long horizons, particularly in turbulent and other chaotic regimes (28, 29). Recent work, such as DySLIM (30), enhances stability by leveraging invariant statistical measures. However, these methods depend on estimating such measures from trajectory samples, which can be computationally intensive and inaccurate for chaotic systems, especially high-dimensional ones.

These limitations have prompted exploration of alternative computational paradigms. Quantum machine learning (QML) has emerged as a possible candidate due to its ability to represent and manipulate high-dimensional probability distributions in Hilbert space (31). Quantum circuits can exploit entanglement and interference to express rich, nonlocal statistical dependencies using fewer parameters than their classical counterparts, which makes them well suited for capturing invariant measures in high-dimensional dynamical systems, where long-range correlations and multimodal distributions frequently arise (32). QML and quantum-inspired ML have already demonstrated potential in fields such as quantum chemistry (33, 34), combinatorial optimization (35, 36), and generative modeling (37, 38). However, the field is constrained on two fronts: Fully quantum approaches are limited by noisy intermediate-scale quantum (NISQ) hardware noise and scalability (39), while quantum-inspired algorithms, being classical simulations, cannot natively leverage crucial quantum effects such as entanglement to efficiently represent the complex, nonlocal correlations found in such systems. These challenges limit the standalone utility of QML in scientific applications today. Instead, hybrid quantum-classical models provide a promising compromise, where quantum submodules work together with classical learning pipelines to improve expressivity, data efficiency, and physical fidelity. In quantum chemistry, this hybrid paradigm has proven feasible, notably through quantum mechanical/molecular mechanical coupling (40, 41), where classical force fields are augmented with quantum corrections. Within such frameworks, techniques such as quantum-selected configuration interaction (42) have been used to enhance accuracy while keeping quantum resource requirements tractable.

In the broader landscape of quantum computational fluid dynamics, progress has been made toward developing full quantum solvers for nonlinear PDEs. Recent works by Liu et al. (43) and Sanavio et al. (44, 45) have successfully applied Carleman linearization to the lattice Boltzmann equation, offering a promising pathway for simulating fluid flows at moderate Reynolds numbers. These approaches, typically using algorithms such as Harrow-Hassidim-Lloyd (HHL) (46), promise exponential speedups but generally necessitate deep circuits and fault-tolerant hardware.
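The hybrid quantum-classical pattern described in the previous paragraphs can be illustrated with a few lines of PennyLane. The circuit shape, untrained weights, and linear readout below are illustrative assumptions, not the QIML architecture from the paper: a small variational circuit produces quantum features that a classical head turns into a prediction.

```python
# Hybrid quantum-classical sketch: variational circuit as a feature extractor
# (simulated on PennyLane's default.qubit device), classical linear readout.
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_features(x, weights):
    # Encode classical inputs as rotation angles, entangle, then read out
    # one Pauli-Z expectation value per qubit as a feature vector.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
rng = np.random.default_rng(0)
weights = rng.uniform(0, np.pi, shape)   # untrained, illustrative weights
head = np.array([0.5, -0.25, 0.1, 0.3])  # classical linear readout

x = np.array([0.1, 0.7, 0.3, 0.9])
features = np.array(quantum_features(x, weights))
print("prediction:", float(head @ features))
```

In a real pipeline, both the circuit weights and the classical head would be trained jointly; the repeated quantum-classical round trips that this requires are exactly the communication cost the next paragraph mentions.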

Quantum-enhanced machine learning (QEML) combines the representational richness of quantum models with the scalability of classical learning. By leveraging uniquely quantum properties such as superposition and entanglement, QEML can explore richer feature spaces and capture complex correlations that are challenging for purely classical models. Recent successes in quantum-enhanced drug discovery (37), where hybrid quantum-classical generative models have produced experimentally validated candidates rivaling state-of-the-art classical methods, demonstrate the practical potential of QEML even before full quantum advantage is achieved. Despite these strengths, practical barriers remain. QEML pipelines require repeated quantum-classical communication during training and rely on costly quantum data-embedding and measurement steps, which slow computation and limit accessibility across research institutions.
