
🌌 Unifying AI Through the Feynman Path Integral: From Deep Learning to Quantum AI

https://lnkd.in/g4Cfv6qd

I’m pleased to share a framework that brings many areas of AI into a single mathematical structure inspired by the Feynman path integral, a foundational idea in quantum physics. Instead of viewing supervised learning, reinforcement learning, generative models, and quantum machine learning as separate disciplines, this framework shows that they all follow the same underlying principle: learning is a weighted sum over possible solutions (paths), based on how well each one explains the data.

In other words, AI can be viewed the same way Feynman viewed physics: as summing over all possible configurations, weighted by an action functional.
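As a rough sketch of what that unifying principle looks like symbolically (my notation, not taken verbatim from the linked post): let theta range over candidate solutions (weight configurations, policies, generative trajectories) and let S[theta] be an action functional scoring how poorly a candidate explains the data; each setting then amounts to a Boltzmann-weighted sum over all candidates.

```latex
% Sketch of the path-integral view of learning (my notation, not the
% post's exact formulation). S[\theta] is an action functional, e.g.
% a negative log-likelihood plus regularization terms.
\[
  Z = \int \mathcal{D}\theta \; e^{-S[\theta]}, \qquad
  p(\theta \mid \mathrm{data}) = \frac{e^{-S[\theta]}}{Z}
\]
% Predictions are averages over all candidate solutions ("paths"),
% weighted by how well each one explains the data:
\[
  \hat{y}(x) = \int \mathcal{D}\theta \; f_\theta(x)\, p(\theta \mid \mathrm{data})
\]
```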

Scientist Solves 100-Year-Old Physics Puzzle To Track Airborne Killers

Researchers at the University of Warwick have created a straightforward new way to predict how irregularly shaped nanoparticles, a harmful type of airborne pollutant, move through the air.

Each day, people inhale countless microscopic particles such as soot, dust, pollen, microplastics, viruses, and engineered nanoparticles. Many of these particles are so small that they can reach deep into the lungs and even pass into the bloodstream, where they may contribute to serious health problems including heart disease, stroke, and cancer.

While most airborne particles have uneven shapes, existing mathematical models often treat them as perfect spheres because that makes the equations easier to handle. This simplification limits scientists’ ability to accurately describe or track how real, non-spherical particles move, especially those that are more dangerous.
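The kind of simplification referred to here is, for instance, the classical Stokes drag law, which has a clean closed form only because the particle is assumed to be a sphere (standard low-Reynolds-number fluid dynamics, not a formula from the article):

```latex
% Stokes drag on a sphere of radius R moving at speed v through a
% fluid of dynamic viscosity \mu (valid at low Reynolds number):
\[
  F_{\mathrm{drag}} = 6 \pi \mu R v
\]
% For an irregularly shaped particle there is no comparably simple
% closed-form expression, which is why spherical approximations are
% so common.
```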

When speaking out feels risky: New study maps hidden dynamics of self-censorship

In an era when social media blurs the line between public and private speech, how do people decide whether to speak their minds or stay silent?

A new study from researchers at Arizona State University and the University of Michigan, published in the Proceedings of the National Academy of Sciences, offers a groundbreaking look at the strategic trade-offs individuals make when facing the threat of punishment for dissent.

The work, co-authored by Professor Stephanie Forrest and Assistant Professor Joshua J. Daymude in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at ASU, and Robert Axelrod from the University of Michigan, introduces a model to explain when people choose to express dissent or self-censor.

What Is a Manifold?

Standing in the middle of a field, we can easily forget that we live on a round planet. We’re so small in comparison to the Earth that from our point of view, it looks flat.

The world is full of such shapes — ones that look flat to an ant living on them, even though they might have a more complicated global structure. Mathematicians call these shapes manifolds. Introduced by Bernhard Riemann in the mid-19th century, manifolds transformed how mathematicians think about space. It was no longer just a physical setting for other mathematical objects, but rather an abstract, well-defined object worth studying in its own right.

This new perspective allowed mathematicians to rigorously explore higher-dimensional spaces — leading to the birth of modern topology, a field dedicated to the study of mathematical spaces like manifolds. Manifolds have also come to occupy a central role in fields such as geometry, dynamical systems, data analysis and physics.
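For readers who want the precise statement behind the “looks flat to an ant” picture above, the standard textbook definition (not quoted from the article) is short:

```latex
% Standard definition of a topological manifold (textbook material,
% not quoted from the article).
\[
  M \text{ is an } n\text{-manifold} \;\Longleftrightarrow\;
  \text{every point } p \in M \text{ has an open neighborhood}
  \text{ homeomorphic to an open subset of } \mathbb{R}^{n}.
\]
% Examples: the circle is a 1-manifold and the sphere is a 2-manifold;
% each looks locally like a line or a plane, even though its global
% shape is curved and closed.
```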

RRAM-based analog computing system rapidly solves matrix equations with high precision

Analog computers are systems that perform computations by manipulating physical quantities, such as electrical currents, that map onto mathematical variables, rather than representing information abstractly as discrete binary values (i.e., 0 or 1), as digital computers do.

While analog computing systems can perform well on specific tasks, they are known to be susceptible to noise (i.e., background or external interference) and are less precise than their digital counterparts.

Researchers at Peking University and the Beijing Advanced Innovation Center for Integrated Circuits have developed a scalable analog computing device that can solve so-called matrix equations with remarkable precision. This new system, introduced in a paper published in Nature Electronics, was built using tiny non-volatile memory devices known as resistive random-access memory (RRAM) chips.
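The widely cited principle behind RRAM analog computing is that Ohm’s law and Kirchhoff’s current law let a crossbar of programmable conductances compute a matrix-vector product in a single physical step (I = G·V). The Python sketch below is a plain numerical illustration of using such products inside a simple iterative solver; the solver choice (Richardson iteration) and its parameters are mine, not the feedback circuit from the Nature Electronics paper.

```python
import numpy as np

# Idealized RRAM crossbar: a matrix of programmable conductances G
# turns input voltages V into output currents I = G @ V in a single
# analog step (Ohm's law per cell, Kirchhoff's law per column).
def crossbar_mvm(G, V):
    return G @ V

# Toy iterative scheme (Richardson iteration) for solving G x = b,
# using the "crossbar" only for matrix-vector products. This is a
# generic illustration, not the circuit described in the paper.
def solve_with_crossbar(G, b, step_size=0.1, steps=2000):
    x = np.zeros_like(b)
    for _ in range(steps):
        residual = b - crossbar_mvm(G, x)  # measured output error
        x = x + step_size * residual       # correction fed back to inputs
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((4, 4)) + 4 * np.eye(4)  # well-conditioned test matrix
    b = rng.random(4)
    x = solve_with_crossbar(A, b)
    print(np.allclose(A @ x, b, atol=1e-6))  # expected: True
```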

Mathematical proof debunks the idea that the universe is a computer simulation

From the article:

“We have demonstrated that it is impossible to describe all aspects of physical reality using a computational theory of quantum gravity,” says Dr. Faizal. “Therefore, no physically complete and consistent theory of everything can be derived from computation alone. Rather, it requires a non-algorithmic understanding, which is more fundamental than the computational laws of quantum gravity and therefore more fundamental than spacetime itself.”


It’s a plot device beloved by science fiction: our entire universe might be a simulation running on some advanced civilization’s supercomputer. But new research from UBC Okanagan has mathematically proven this isn’t just unlikely—it’s impossible.

Dr. Mir Faizal, Adjunct Professor with UBC Okanagan’s Irving K. Barber Faculty of Science, and his international colleagues, Drs. Lawrence M. Krauss, Arshid Shabir, and Francesco Marino, have shown that the fundamental nature of reality operates in a way that no computer could ever simulate.

Their findings, published in the Journal of Holography Applications in Physics, go beyond simply suggesting that we’re not living in a simulated world like The Matrix. They prove something far more profound: the universe is built on a type of understanding that exists beyond the reach of any algorithm.

Gemini gets a huge upgrade for academics and researchers with powerful new LaTeX features

For anyone who has ever wrestled with creating documents containing complex mathematical equations, intricate tables, or precise multi-column layouts, the LaTeX document preparation system is likely a familiar (and sometimes frustrating) friend. It’s the standard for high-quality academic, scientific, and technical documents, but it traditionally requires specialized editors and significant technical know-how.
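As a reminder of the kind of markup involved, here is a generic LaTeX snippet combining an equation and a small table; it is placeholder content for illustration only, not output from Gemini.

```latex
% Generic illustration of the constructs mentioned above
% (an equation and a small table); all content is placeholder.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

\begin{equation}
  \int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}
\end{equation}

\begin{tabular}{lcc}
  \hline
  Item     & Column A & Column B \\
  \hline
  Sample 1 & \ldots   & \ldots   \\
  Sample 2 & \ldots   & \ldots   \\
  \hline
\end{tabular}

\end{document}
```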

Midtraining Bridges Pretraining and Posttraining Distributions

Recently, many language models have been pretrained with a “midtraining” phase, in which higher-quality, often instruction-formatted data is mixed in at the end of pretraining. Despite the popularity of this practice, there is little scientific understanding of this phase of model training or why it is effective. In this work, we conduct the first systematic investigation of midtraining through controlled experiments with language models pretrained from scratch and fine-tuned on supervised fine-tuning datasets in different domains. We find that, when compared after supervised fine-tuning, the effectiveness of midtraining is highest in the math and code domains, where midtraining can best reduce the syntactic gap between pretraining and posttraining data. In these cases, midtraining consistently outperforms continued pretraining in both in-domain validation loss and forgetting of pretraining data after posttraining. Using code midtraining as a case study, we conduct ablations on the starting time of the midtraining phase and the mixture weights of the midtraining data, and find that timing has a greater impact than mixture weights, with earlier introduction of specialized data yielding greater in-domain benefits while better preserving general language modeling. These findings establish midtraining as a domain-adaptation technique that, compared to continued pretraining, yields better performance through reduced forgetting.
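The abstract does not spell out an exact recipe, but the basic mechanism it studies, switching the pretraining data mixture partway through training, can be sketched as follows. The start fraction and mixture weight below are illustrative placeholders, not the paper’s settings.

```python
import random

def sample_batch(step, total_steps, pretrain_docs, midtrain_docs,
                 midtrain_start_frac=0.8, midtrain_weight=0.3,
                 batch_size=8, seed=0):
    """Draw one training batch, mixing in specialized 'midtraining' data
    (e.g. code or instruction-formatted text) once training passes a
    chosen fraction of the schedule. Fraction and weight are
    illustrative placeholders, not the paper's settings."""
    rng = random.Random(seed + step)
    in_midtraining = step >= midtrain_start_frac * total_steps
    batch = []
    for _ in range(batch_size):
        if in_midtraining and rng.random() < midtrain_weight:
            batch.append(rng.choice(midtrain_docs))   # specialized data
        else:
            batch.append(rng.choice(pretrain_docs))   # general pretraining data
    return batch

# Example: in the last 20% of steps, roughly 30% of each batch comes
# from the specialized corpus; before that, batches are purely general.
general = ["general web text ...", "more general text ..."]
specialized = ["def add(a, b): return a + b", "Instruction: ... Response: ..."]
print(sample_batch(step=900, total_steps=1000,
                   pretrain_docs=general, midtrain_docs=specialized))
```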
