
Rules that Reality Plays By — Dr. Stephen Wolfram, DemystifySci #343

Stephen Wolfram is a physicist, mathematician, and programmer who believes he has discovered the computational rules that organize the universe at the finest grain. These are not physical laws like the equations of state or Maxwell’s equations; according to Wolfram, they govern how the universe evolves and operates at least one level below the reality we inhabit. His computational principles are inspired by cellular automata, which show that a very simple system with very simple rules can produce complex patterns that often look organic and always look far more intricate than the black and white squares the game started with. He believes that the hyperspace relationships that emerge when a computational rule is applied over and over again represent the nature of the universe, and that those relationships contain everything from the seed of human experience to the equations for relativity, evolution, and black holes. We sit down with him for a conversation about the platonic endeavor he has undertaken: where to draw the line between lived experience and the computational universe, the limits of physics, the value of purpose, and the source of consciousness.


The cost of thinking: Reasoning models share aspects of information processing with human brains

Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.

A new generation of LLMs, known as reasoning models, is being trained to solve complex problems. Like humans, these models need some time to think such problems through, and remarkably, scientists at MIT’s McGovern Institute for Brain Research have found that the problems requiring the most processing from reasoning models are the very same ones that people need to take their time with.

In other words, they report in the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.

We’re (Probably) Not Alone Out Here…


Why haven’t we heard from aliens? That’s a question that sounds simple but turns into a mess the moment you try to answer it. Recently, a mathematician tried to simplify matters by calculating the odds that we’re the only intelligent life in the universe; according to his math, we shouldn’t be. Let’s take a look.

Paper: https://www.sciencedirect.com/science…


Quantum ground states: Scalable counterdiabatic driving technique enables reliable and rapid preparation

Quantum ground states are the lowest-energy states of quantum systems. Quantum computers are increasingly being used to analyze the ground states of interesting systems, which could in turn inform the design of new materials, chemical compounds, pharmaceutical drugs and other valuable products.

The reliable preparation of quantum ground states has been a long-standing goal within the physics research community. One quantum computing method to prepare ground states and other desired states is known as adiabatic state preparation.

This is a process that starts from an initial Hamiltonian (a mathematical operator encoding a system’s total energy) whose ground state is known, and gradually deforms it into a final Hamiltonian whose ground state is the one being sought.
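
For intuition, here is a minimal numerical sketch of that interpolation for a toy two-level system; the Hamiltonians, schedule, and time scale below are illustrative assumptions, not taken from the paper:

import numpy as np
from scipy.linalg import expm

# Toy adiabatic state preparation (illustrative only).
# H0 has a known ground state; Hf encodes the state we want.
H0 = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x; ground state (|0> - |1>)/sqrt(2)
Hf = np.array([[-1.0, 0.0], [0.0, 1.0]])  # -sigma_z; ground state |0>

T, steps = 50.0, 5000                      # total evolution time, number of steps
dt = T / steps
psi = np.array([1.0, -1.0], dtype=complex) / np.sqrt(2)  # exact ground state of H0

for k in range(steps):
    s = (k + 0.5) / steps                  # linear schedule s(t) = t / T
    H = (1 - s) * H0 + s * Hf              # slowly interpolated Hamiltonian
    psi = expm(-1j * H * dt) @ psi         # one small time-evolution step

target = np.array([1.0, 0.0])              # ground state of Hf
print("fidelity with target ground state:", abs(target.conj() @ psi) ** 2)

If T is made too short relative to the minimum spectral gap along the path, the evolution stops being adiabatic and the fidelity drops. Counterdiabatic driving adds correction terms to the Hamiltonian that suppress exactly those diabatic transitions, which is what permits the rapid preparation described in the title.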

A New Bridge Links the Strange Math of Infinity to Computer Science

All of modern mathematics is built on the foundation of set theory, the study of how to organize abstract collections of objects. But in general, research mathematicians don’t need to think about it when they’re solving their problems. They can take it for granted that sets behave the way they’d expect, and carry on with their work.

Descriptive set theorists are an exception. This small community of mathematicians never stopped studying the fundamental nature of sets — particularly the strange infinite ones that other mathematicians ignore.

Their field just got a lot less lonely. In 2023, a mathematician named Anton Bernshteyn published a deep and surprising connection between the remote mathematical frontier of descriptive set theory and modern computer science.

Early experiments in accelerating science with GPT-5

Most strikingly, the paper claims four genuinely new mathematical results, carefully verified by the human mathematicians involved. In a discipline where truth is eternal and progress is measured in decades, an AI contributed novel insights that helped settle previously unsolved problems. The authors stress these contributions are “modest in scope but profound in implication”—not because they’re minor, but because they represent a proof of concept. If GPT-5 can do this now, what comes next?

The paper carries an undercurrent of urgency: many scientists still don’t realize what’s possible. The authors are essentially saying, “Look, this is already working for us—don’t get left behind.” Yet they avoid boosterism, emphasizing the technology’s current limitations as clearly as its strengths.



Supercomputer simulates quantum chip in unprecedented detail

Researchers from across Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California, Berkeley collaborated to perform an unprecedented simulation of a quantum microchip, a key step toward perfecting the chips required for this next-generation technology. The simulation used more than 7,000 NVIDIA GPUs on the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy (DOE) user facility.

Modeling quantum chips allows researchers to understand their function and performance before they’re fabricated, ensuring that they work as intended and spotting any problems that might come up. Quantum Systems Accelerator (QSA) researchers Zhi Jackie Yao and Andy Nonaka of the Applied Mathematics and Computational Research (AMCR) Division at Berkeley Lab develop electromagnetic models to simulate these chips, a key step in the process of producing better quantum hardware.

“The model predicts how design decisions affect electromagnetic wave propagation in the chip,” said Nonaka, “to make sure proper signal coupling occurs and to avoid unwanted crosstalk.”

AI at the speed of light just became a possibility

Researchers at Aalto University have demonstrated single-shot tensor computing at the speed of light, a remarkable step towards next-generation artificial general intelligence hardware powered by optical computation rather than electronics.

Tensor operations are the kind of arithmetic that forms the backbone of nearly all modern technologies, especially artificial intelligence, yet they extend beyond the simple math we’re familiar with. Imagine the mathematics behind rotating, slicing, or rearranging a Rubik’s cube along multiple dimensions. While humans and classical computers must perform these operations step by step, light can do them all at once.
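
As a concrete point of comparison, here is a small tensor contraction as a classical computer performs it step by step; the shapes and values are arbitrary illustrations, and the optical processor described here would in effect evaluate such an operation in a single pass of light:

import numpy as np

# A batched matrix-vector product: a rank-3 stack of matrices
# applied to a stack of vectors, a staple tensor operation in AI.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 8, 8))   # 32 matrices, each 8x8
x = rng.standard_normal((32, 8))      # 32 vectors of length 8

# einsum loops over every index combination internally:
# batch b, output row i, summed column j.
y = np.einsum("bij,bj->bi", A, x)
print(y.shape)  # (32, 8)

Every multiply-and-add hidden inside that call is work a digital processor must schedule sequentially and power; evaluating the whole contraction in one optical shot is the advance being reported.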

Today, nearly every task in AI, image recognition included, relies on tensor operations. However, the explosion of data has pushed conventional digital computing platforms, such as GPUs, to their limits in terms of speed, scalability and energy consumption.

New Proofs Probe Soap-Film Singularities

It would take nearly a century for mathematicians to prove Plateau right. In the early 1930s, Jesse Douglas and Tibor Radó independently showed that the answer to the “Plateau problem” is yes: For any closed curve (your wire frame) in three-dimensional space, you can always find a minimizing two-dimensional surface (your soap film) that has the same boundary. The proof later earned Douglas the first-ever Fields Medal.
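
In symbols, the problem has a compact statement; this is a generic sketch of the usual formulation, not the specific setup of the new proofs:

\[
  \Sigma^{*} \;=\; \operatorname*{arg\,min}\,\bigl\{\, \mathrm{Area}(\Sigma) \;:\; \partial\Sigma = \Gamma \,\bigr\},
\]

where \Gamma is the closed boundary curve (the wire frame) and \Sigma ranges over surfaces spanning it. The Douglas–Radó theorem guarantees that a minimizer \Sigma^{*} exists for any closed curve in three-dimensional space.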

Since then, mathematicians have expanded on the Plateau problem in hopes of learning more about minimizing surfaces. These surfaces appear throughout math and science: in proofs of important conjectures in geometry and topology, in the study of cells and black holes, and even in the design of biomolecules. “They’re very beautiful objects to study,” said Otis Chodosh of Stanford University. “Very natural, appealing and intriguing.”

Mathematicians now know that Plateau’s prediction is categorically true up through dimension seven. But in higher dimensions, there’s a caveat: The minimizing surfaces that form might not always be nice and smooth, like the disk or hourglass. Instead, they might fold, pinch or intersect themselves in places, forming what are known as singularities. When minimizing surfaces have singularities, it becomes much harder to understand and work with them.
