This is a ~50-minute talk titled “Substrate-dependent mathematics hypothesis” by Olaf Witkowski (https://olafwitkowski.com/), presented for our Platonic Space symposium (https://thoughtforms.life/symposium-on-the-platonic-space/).
Category: mathematics

Physicist proves unsolvability beyond one dimension for quantum Ising models
By extending a proof concerning a physically important property of one-dimensional quantum spin systems to higher dimensions, a RIKEN physicist has shown in a new study that the quantum Ising model lacks exact solutions beyond one dimension. The research is published in the journal Physical Review B.
Theoretical physicists develop mathematical models to describe material systems, which they can then use to make predictions about how materials will behave.
One of the most important models is the Ising model, which was first developed about a century ago to model magnetic materials such as iron and nickel.
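For reference (an addition for this digest, not text from the article), the model is standardly written as follows, with classical spins s_i = ±1, coupling J, and, in the quantum case, Pauli operators and a transverse field h:

```latex
% Classical Ising energy over neighboring spin pairs:
E(\{s\}) = -J \sum_{\langle i,j \rangle} s_i \, s_j
% Quantum (transverse-field) Ising Hamiltonian, the setting of the RIKEN result:
H = -J \sum_{\langle i,j \rangle} \sigma^z_i \sigma^z_j \;-\; h \sum_i \sigma^x_i
```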
Light-Powered AI Chips: The Photonic Revolution That’s About to Change Everything
The future of artificial intelligence (AI) may be revolutionized by photonic AI chips that use light instead of electricity to process information, enabling faster, more efficient, and virtually heat-free computing.
Questions to inspire discussion.
Photonic AI Technology.
🔬 Q: What makes photonic AI chips more efficient than current AI chips? A: Photonic AI chips are 100x more energy efficient and produce virtually zero heat compared to electronic chips, as they use light instead of electrons for computation.
🌈 Q: How do photonic chips encode information differently? A: Photonic chips can encode information simultaneously in a light signal’s wavelength, amplitude, and phase, routing light through mirrors and other optical devices in place of traditional electronic processors (a toy numerical sketch follows below).
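A toy numerical sketch of that encoding idea (illustrative only; the channel count, values, and the use of a random unitary are my assumptions, not details from the article):

```python
import numpy as np

# Toy sketch: a light signal carries information in amplitude and phase as a
# single complex number, and in several wavelengths at once (one value each).
wavelengths_nm = [1540, 1550, 1560]          # three WDM channels (illustrative)
signal = np.array([0.8 * np.exp(1j * 0.3),   # amplitude 0.8, phase 0.3 rad
                   0.5 * np.exp(1j * 1.1),
                   1.0 * np.exp(1j * -0.7)])

# An ideal lossless optical circuit (e.g. a mesh of interferometers) acts as a
# unitary matrix on these complex amplitudes: a matrix-vector product "in flight".
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
output = q @ signal

# Photodetectors measure optical power, i.e. the squared magnitude per channel.
powers = np.abs(output) ** 2
print(dict(zip(wavelengths_nm, powers.round(4))))
```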

Google DeepMind discovers new solutions to century-old problems in fluid dynamics
For centuries, mathematicians have developed complex equations to describe the fundamental physics involved in fluid dynamics. These laws govern everything from the swirling vortex of a hurricane to airflow lifting an airplane’s wing.
Mathematicians can carefully craft scenarios in which the theory departs from physical reality, producing situations that could never actually occur. In these scenarios, quantities such as velocity or pressure become infinite; such events are called ‘singularities’ or ‘blow-ups’. They help mathematicians identify fundamental limitations in the equations of fluid dynamics, and help improve our understanding of how the physical world functions.
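As a much simpler toy illustration than the equations studied in the paper (my addition, for intuition only), the inviscid Burgers equation is the textbook example of finite-time blow-up:

```latex
% Inviscid Burgers equation with initial profile u_0:
\partial_t u + u \, \partial_x u = 0, \qquad u(x, 0) = u_0(x)
% The solution is constant along characteristics x(t) = x_0 + t \, u_0(x_0), so
\partial_x u\big(x(t), t\big) = \frac{u_0'(x_0)}{1 + t \, u_0'(x_0)}
% and wherever u_0' < 0 the slope diverges at the finite time
T^\ast = -\frac{1}{\min_{x_0} u_0'(x_0)}
```

Here it is the gradient, not the velocity, that becomes infinite; the equations treated in the paper can exhibit far richer singular behavior.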
In a new paper, we introduce an entirely new family of mathematical blow ups to some of the most complex equations that describe fluid motion. We’re publishing this work in collaboration with mathematicians and geophysicists from institutions including Brown University, New York University and Stanford University.


Gemini achieves gold-level performance at the International Collegiate Programming Contest World Finals
Gemini 2.5 Deep Think achieves breakthrough performance at the world’s most prestigious computer programming competition, demonstrating a profound leap in abstract problem solving.
An advanced version of Gemini 2.5 Deep Think has achieved gold-medal level performance at the 2025 International Collegiate Programming Contest (ICPC) World Finals.
This milestone builds directly on Gemini 2.5 Deep Think’s gold-medal win at the International Mathematical Olympiad (IMO) just two months ago. Innovations from these efforts will continue to be integrated into future versions of Gemini Deep Think, expanding the frontier of advanced AI capabilities accessible to students and researchers.

Neuromorphic Intelligence Leverages Dynamical Systems Theory To Model Inference And Learning In Sustainable, Adaptable Systems
The pursuit of artificial intelligence increasingly focuses on replicating the efficiency and adaptability of the human brain, and a new approach, termed neuromorphic intelligence, offers a promising path forward. Marcel van Gerven from Radboud University and colleagues demonstrate how brain-inspired systems can achieve significantly greater energy efficiency than conventional digital computers. This research establishes a unifying theoretical framework, rooted in dynamical systems theory, to integrate insights from diverse fields including neuroscience, physics, and artificial intelligence. By harnessing noise as a learning resource and employing differential genetic programming, the team advances the development of truly adaptive and sustainable artificial intelligence, paving the way for emergent intelligence arising directly from physical substrates.
Researchers demonstrate that applying dynamical systems theory, a mathematical framework describing change over time, to artificial intelligence enables more sustainable and adaptable systems. The approach harnesses noise as a learning resource and lets intelligence emerge from the physical properties of the system itself (see the sketch below).
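One concrete, textbook instance of noise-as-a-learning-resource (a sketch for this digest, not the paper’s method) is Langevin-style noisy gradient descent, where injected noise lets the parameter dynamics escape poor regions of the loss landscape:

```python
import numpy as np

# Sketch: the parameter trajectory is a noisy dynamical system; the injected
# noise term lets it cross barriers that trap plain gradient descent.
rng = np.random.default_rng(42)

def loss_grad(theta):
    # Gradient of a double-well loss L(t) = (t^2 - 1)^2, with minima at -1 and
    # +1 separated by a barrier at 0.
    return 4.0 * theta * (theta ** 2 - 1.0)

theta, step = -0.1, 0.01
for _ in range(5000):
    noise = np.sqrt(2.0 * step) * rng.normal()       # injected noise term
    theta = theta - step * loss_grad(theta) + noise
    # Without the noise, descent from -0.1 would settle into the nearest well;
    # with it, the dynamics can explore both wells over time.

print(f"final theta: {theta:.3f}")
```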
Doing The Math On CPU-Native AI Inference
A number of chip companies, most notably Intel and IBM but also the Arm collective and AMD, have recently come out with new CPU designs that feature native support for artificial intelligence (AI) and related machine learning (ML) workloads. The need for math engines specifically designed to support machine learning algorithms, particularly for inference workloads but also for certain kinds of training, has been covered extensively here at The Next Platform.
Just to rattle off a few of them, consider the impending “Cirrus” Power10 processor from IBM, which is due in a matter of days from Big Blue in its high-end NUMA machines and which has a new matrix math engine aimed at accelerating machine learning. Or IBM’s “Telum” z16 mainframe processor coming next year, which was unveiled at the recent Hot Chips conference and which has a dedicated mixed-precision matrix math core for the CPU cores to share. Intel is adding its Advanced Matrix Extensions (AMX) to its future “Sapphire Rapids” Xeon SP processors, which should have been here by now but which have been pushed out to early next year. Arm Holdings has created future Arm core designs, the “Zeus” V1 core and the “Perseus” N2 core, that will have substantially wider vector engines that support the mixed-precision math commonly used for machine learning inference. Ditto for the vector engines in the “Milan” Epyc 7003 processors from AMD.
All of these chips are designed to keep inference on the CPUs, where in a lot of cases it belongs because of data security, data compliance, and application latency reasons.
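As a rough sketch of the arithmetic pattern such matrix engines accelerate (illustrative only, not modeled on any particular chip’s instructions), low-precision multiplies are paired with higher-precision accumulation:

```python
import numpy as np

# Sketch of quantized mixed-precision matmul: int8 inputs, int32 accumulation,
# float32 rescale -- the pattern behind most inference matrix engines.
rng = np.random.default_rng(0)
activations = rng.normal(size=(4, 8)).astype(np.float32)
weights = rng.normal(size=(8, 3)).astype(np.float32)

def quantize_int8(x):
    """Symmetric per-tensor quantization: float32 -> int8 plus a scale factor."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

a_q, a_scale = quantize_int8(activations)
w_q, w_scale = quantize_int8(weights)

# Multiply in int8, accumulate in int32 (what the hardware engine does),
# then rescale the accumulated result back to float32.
acc = a_q.astype(np.int32) @ w_q.astype(np.int32)
result = acc.astype(np.float32) * (a_scale * w_scale)

print(np.max(np.abs(result - activations @ weights)))  # small quantization error
```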


Machine learning unravels quantum atomic vibrations in materials
Caltech scientists have developed an artificial intelligence (AI)–based method that dramatically speeds up calculations of the quantum interactions that take place in materials. In new work, the group focuses on interactions among atomic vibrations, or phonons—interactions that govern a wide range of material properties, including heat transport, thermal expansion, and phase transitions. The new machine learning approach could be extended to compute all quantum interactions, potentially enabling encyclopedic knowledge about how particles and excitations behave in materials.
Scientists like Marco Bernardi, professor of applied physics, physics, and materials science at Caltech, and his graduate student Yao Luo (MS ‘24) have been trying to find ways to speed up the gargantuan calculations required to understand such particle interactions from first principles in real materials—that is, beginning with only a material’s atomic structure and the laws of quantum mechanics.
Last year, Bernardi and Luo developed a data-driven method based on a technique called singular value decomposition (SVD) to simplify the enormous mathematical matrices scientists use to represent the interactions between electrons and phonons in a material.
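A minimal illustration of that idea (a sketch for this digest, not the group’s actual code): a truncated SVD keeps only the largest singular values of an interaction matrix, yielding a compact low-rank approximation.

```python
import numpy as np

# Stand-in for an electron-phonon coupling matrix; the real ones are far
# larger, but often close to low rank, which is what truncated SVD exploits.
rng = np.random.default_rng(1)
low_rank = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 500))
g = low_rank + 0.01 * rng.normal(size=(500, 500))   # nearly low-rank + noise

u, s, vt = np.linalg.svd(g, full_matrices=False)
k = 8                                               # keep the top-k singular values
g_k = u[:, :k] * s[:k] @ vt[:k, :]                  # rank-k reconstruction

compression = g.size / (u[:, :k].size + k + vt[:k, :].size)
rel_err = np.linalg.norm(g - g_k) / np.linalg.norm(g)
print(f"compression ~{compression:.0f}x, relative error {rel_err:.4f}")
```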