
Transcutaneous Peripheral Nerve Stimulation for Essential Tremor: A Randomized Clinical Trial

Essential tremor (ET), the most common upper limb tremor, can impair daily activities. In a multicenter RCT, an artificial intelligence–driven transcutaneous peripheral nerve stimulation (TPNS) device reduced modified Activities of Daily Living (mADL) scores by 6.9 points at 90 days, compared with a 2.7-point reduction in the sham group.


Question Is an artificial intelligence (AI)–driven TPNS device superior to a sham device in reducing essential tremor?

Findings In this randomized clinical trial that included 125 adults with essential tremor, use of the TPNS device reduced the modified Activities of Daily Living score of the Essential Tremor Rating Assessment Scale by a clinically meaningful 6.9 points at 90 days, significantly more than the 2.7-point reduction seen in the sham-treated group.

Meaning The TPNS device improved activities related to upper limb tremor at 90 days and could be an effective noninvasive treatment for essential tremor.

Math, Inc.

The Math Inc. team is excited to introduce Gauss, a first-of-its-kind autoformalization agent for assisting human expert mathematicians at formal verification. Using Gauss, we have completed a challenge set by Fields Medallist Terence Tao and Alex Kontorovich in January 2024 to formalize the strong Prime Number Theorem (PNT) in Lean (GitHub).

The translation of human mathematics into verifiable machine code has long been a grand challenge, and the cost of doing so remains prohibitive, requiring scarce human expertise. In particular, after 18 months of work, Tao and Kontorovich announced in July 2025 that they had made only intermediate progress toward their goal, obstructed by core difficulties in the field of complex analysis.

In light of such difficulties, we are pleased to announce that with Gauss, we have completed the project after three weeks of effort. Gauss can work autonomously for hours, dramatically compressing the labor previously reserved for top formalization experts. Along the way, Gauss formalized the key missing results in complex analysis, which opens up future initiatives previously considered unapproachable.
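For orientation, here is a hedged sketch in Lean 4 with Mathlib of one classical, weak formulation of the Prime Number Theorem, stated as a Prop: pi(x) * log(x) / x tends to 1 as x grows. The strong PNT that the project formalized carries explicit error terms and is stated differently; the Mathlib name Nat.primeCounting is real, but the statement below is only an illustration, not the project's theorem.

import Mathlib

open Filter

/-- Illustration only: a classical, weak form of the Prime Number Theorem,
    π(n) * log n / n → 1, using Mathlib's prime-counting function
    `Nat.primeCounting`. The project's strong PNT has explicit error terms
    and a different statement; no proof is attempted here. -/
def ClassicalPNT : Prop :=
  Tendsto (fun n : ℕ => (Nat.primeCounting n : ℝ) * Real.log n / n) atTop (nhds 1)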

Nvidia unveils new open-source AI models amid boom in Chinese offerings

Nvidia on Monday revealed the third generation of its “Nemotron” large-language models aimed at writing, coding and other tasks. The smallest of the models, called Nemotron 3 Nano, was being released Monday, with two other, larger versions coming in the first half of 2026.

Nvidia, which has become the world’s most valuable listed company, said that Nemotron 3 Nano was more efficient than its predecessor — meaning it would be cheaper to run — and would do better at long tasks with multiple steps.

Nvidia is releasing the models at a time when open-source offerings from Chinese tech firms such as DeepSeek, Moonshot AI and Alibaba Group Holding are becoming widely used in the tech industry, with companies such as Airbnb disclosing use of Alibaba’s Qwen open-source model.
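For context on what an open-weight release means in practice, here is a minimal, hedged sketch of loading and sampling from an open-weight causal language model with the Hugging Face transformers library; the model identifier is a placeholder, not a real Nemotron 3 checkpoint name.

# Minimal sketch: loading and sampling from an open-weight causal LM with
# Hugging Face transformers. "org/open-model-name" is a placeholder, not a
# real Nemotron 3 checkpoint id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/open-model-name"  # placeholder; substitute a real open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduced precision keeps memory use down
    device_map="auto",           # spread layers across available GPUs/CPU
)

prompt = "Write a short plan for refactoring a large Python module."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))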

AI helps explain how covert attention works and uncovers new neuron types

Shifting focus on a visual scene without moving our eyes—think driving, or reading a room for the reaction to your joke—is a behavior known as covert attention. We do it all the time, but little is known about its neurophysiological foundation.

Now, using convolutional neural networks (CNNs), UC Santa Barbara researchers Sudhanshu Srivastava, Miguel Eckstein and William Wang have uncovered the underpinnings of covert attention, and in the process, have found new, emergent neuron types, which they confirmed in real life using data from mouse brain studies.

“This is a clear case of AI advancing neuroscience, cognitive sciences and psychology,” said Srivastava, a former graduate student in Eckstein’s lab who is now a postdoctoral researcher at UC San Diego.
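The article does not give implementation details, but as a purely illustrative sketch of the general approach, training a small convolutional network on a cued detection task in which a location is flagged without any eye movement, and then probing its units, something like the PyTorch outline below could serve as a starting point. The architecture and task here are assumptions, not the authors' model.

# Hedged sketch, not the study's actual model: a small CNN for a cued
# detection task. The input carries an extra "cue" channel marking the
# attended location; after training, individual feature-map units can be
# probed for cue-dependent (attention-like) response changes.
import torch
import torch.nn as nn

class CuedDetectionCNN(nn.Module):
    def __init__(self, in_channels: int = 2):  # channel 0: image, channel 1: spatial cue
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, 2)  # target present vs. absent

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# Example forward pass on a batch of 8 cued 64x64 images.
model = CuedDetectionCNN()
logits = model(torch.randn(8, 2, 64, 64))
print(logits.shape)  # torch.Size([8, 2])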

Sub-millimeter-sized robots can sense, ‘think’ and act on their own

Robots small enough to travel autonomously through the human body to repair damaged sites may seem the stuff of science fiction dreams. But this vision of surgery on a microscale is a step closer to reality, with news that researchers from the University of Pennsylvania and the University of Michigan have built a robot smaller than a millimeter that has an onboard computer and sensors.

Scientists have been trying for decades to develop microscopic robots, not only for medical applications but also for environmental monitoring and manufacturing. However, they have faced formidable challenges. Existing microbots typically require large, external control systems, such as powerful magnets and lasers, and cannot make autonomous decisions in unfamiliar environments.

AI helps solve decades-old maze in frustrated magnet physics

The study, conducted by Brookhaven theoretical physicist Weiguo Yin and described in a recent paper published in Physical Review B, is the first paper to emerge from the “AI Jam Session” held earlier this year, a first-of-its-kind event hosted by DOE in cooperation with OpenAI to push the limits of general-purpose large language models applied to science research. The event brought together approximately 1,600 scientists across nine host locations within the DOE national laboratory complex. At Brookhaven, more than 120 scientists challenged and evaluated the capabilities of OpenAI’s latest step-based logical reasoning AI model built for complex problem solving.

Yin’s AI study focused on a class of advanced materials known as frustrated magnets. In these systems, the electron spins—the tiny magnetic moments carried by each electron—cannot settle on an orientation because competing interactions pull them in different directions. These materials have unique and fascinating properties that could translate to novel applications in the energy and information technology industries.
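A standard textbook illustration of frustration, separate from the specific materials and models in Yin's paper, is three Ising spins on a triangle with antiferromagnetic coupling: no arrangement can satisfy all three bonds at once. The short enumeration below, with an assumed coupling J = 1, makes the point.

# Textbook illustration of magnetic frustration (not the model from the paper):
# three Ising spins on a triangle with antiferromagnetic coupling J > 0.
# Energy E = J * (s1*s2 + s2*s3 + s3*s1). Each bond "wants" opposite spins,
# but on a triangle at most two of the three bonds can be satisfied.
from itertools import product

J = 1.0  # antiferromagnetic coupling (assumed value)
energies = {}
for s1, s2, s3 in product((-1, +1), repeat=3):
    energies[(s1, s2, s3)] = J * (s1 * s2 + s2 * s3 + s3 * s1)

e_min = min(energies.values())
ground_states = [s for s, e in energies.items() if e == e_min]

print(f"ground-state energy: {e_min}")                    # -J, not the unfrustrated ideal of -3J
print(f"degenerate ground states: {len(ground_states)}")  # 6 configurations tie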

New agentic AI platform accelerates advanced optics design

Stanford engineers debuted a new framework introducing computational tools and self-reflective AI assistants, potentially advancing fields like optical computing and astronomy.

Hyper-realistic holograms, next-generation sensors for autonomous robots, and slim augmented reality glasses are among the applications of metasurfaces, emerging photonic devices constructed from nanoscale building blocks.

Now, Stanford engineers have developed an AI framework that rapidly accelerates metasurface design, with potential widespread technological applications. The framework, called MetaChat, introduces new computational tools and self-reflective AI assistants, enabling rapid solving of optics-related problems. The findings were reported recently in the journal Science Advances.
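The paper is summarized only at a high level here, so the outline below is a hedged sketch of what a self-reflective design loop can look like in general: propose a candidate metasurface design, score it with a solver, and use the result to bias the next proposal. The function names and the random-search stand-in for the AI assistant are illustrative assumptions, not MetaChat's actual components.

# Hedged sketch of a self-reflective design loop, not MetaChat's implementation.
# A "proposer" suggests design parameters, a stub "solver" scores them, and the
# history of scored designs guides the next proposal.
import random

def solve_metasurface(params: dict) -> float:
    # Stand-in for an electromagnetic solver: score how close the design's
    # (toy) focal length is to a target of 10.0. Higher is better.
    focal = params["pillar_width"] * params["period"] * 100.0
    return -abs(focal - 10.0)

def propose(history: list) -> dict:
    # Stand-in for the AI assistant: random search biased toward the best
    # design seen so far.
    if history:
        best = max(history, key=lambda h: h["score"])["params"]
        return {k: v * random.uniform(0.9, 1.1) for k, v in best.items()}
    return {"pillar_width": random.uniform(0.1, 0.5), "period": random.uniform(0.2, 1.0)}

history = []
for step in range(50):
    params = propose(history)
    score = solve_metasurface(params)
    history.append({"params": params, "score": score, "note": f"step {step}: score {score:.3f}"})

best = max(history, key=lambda h: h["score"])
print(best["note"], best["params"])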

Genie 3: Creating dynamic worlds that you can navigate in real-time

Genie 3 is a world builder powered by generative AI. It appears that it could in principle be built into a game engine.

One thing I’d like to do is have procedural generation as the backbone, with generative AI layered on top to modify things in ways that regular proc-gen textures just cannot accomplish.


Introducing Genie 3, a general purpose world model that can generate an unprecedented diversity of interactive environments. Given a text prompt, Genie 3 can generate dynamic worlds that you can navigate in real time at 24 frames per second, retaining consistency for a few minutes at a resolution of 720p.

Watch the Google DeepMind episode on Genie 3 with Hannah Fry here: “Genie 3: An infinite world model.”

Our team has been pioneering research in simulated environments for over a decade, from training agents to master real-time strategy games to developing simulated environments for open-ended learning and robotics. This work motivated our development of world models, which are AI systems that can use their understanding of the world to simulate aspects of it, enabling agents to predict both how an environment will evolve and how their actions will affect it.
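Genie 3 has no public API, but the definition above, a model that predicts how an environment evolves and how actions affect it, maps onto a simple interface. The sketch below uses invented names and a toy "environment" purely to make the prompt-in, action-conditioned-frames-out loop concrete.

# Hypothetical interface sketch only; Genie 3 exposes no public API and these
# names are invented. It captures the loop described in the text: a text prompt
# seeds an environment, then each action yields the next predicted frame.
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    pixels: np.ndarray  # H x W x 3, e.g. 720p frames streamed at 24 fps

class ToyWorldModel:
    """Stand-in world model: the 'environment' is just a drifting gradient."""
    def __init__(self, prompt: str, height: int = 720, width: int = 1280):
        self.offset = float(len(prompt) % 7)   # the prompt only seeds the toy state
        self.h, self.w = height, width

    def step(self, action: str) -> Frame:
        # Predict the next frame conditioned on the current state and the action.
        self.offset += {"left": -1.0, "right": 1.0}.get(action, 0.0)
        ramp = (np.arange(self.w) + self.offset) % 255
        gray = np.broadcast_to(ramp, (self.h, self.w)).astype(np.uint8)
        return Frame(pixels=np.stack([gray] * 3, axis=-1))

world = ToyWorldModel("a foggy coastal village at dawn")
for action in ["right", "right", "left"]:
    frame = world.step(action)
print(frame.pixels.shape)  # (720, 1280, 3)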

Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks

The study presents Predictive Alignment, a local learning rule for recurrent neural networks that aligns internal network predictions with feedback. This biologically inspired method tames chaos and enables robust learning of complex patterns.
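The blurb does not spell out the rule itself, so as background only, the sketch below simulates a standard rate-based recurrent network in the chaotic regime (recurrent gain g > 1), the kind of spontaneous chaos that rules like Predictive Alignment are designed to tame. It does not implement Predictive Alignment; it only sets up the dynamics such a local rule would act on.

# Background sketch: a rate-based RNN in the chaotic regime (gain g > 1).
# This is the setting that learning rules like Predictive Alignment operate in;
# the rule itself is NOT implemented here.
import numpy as np

rng = np.random.default_rng(0)
N, g, tau, dt, steps = 500, 1.5, 10.0, 1.0, 2000

J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random recurrent weights
x = rng.normal(0.0, 0.5, size=N)                        # membrane-like state

trace = np.empty(steps)
for t in range(steps):
    r = np.tanh(x)                      # firing rates
    x = x + dt / tau * (-x + J @ r)     # Euler step of tau * dx/dt = -x + J r
    trace[t] = r[0]

# With g > 1 the activity fluctuates chaotically rather than settling to a
# fixed point; a plasticity rule would modify J online to shape this activity.
print(f"std of unit 0 activity over the last 500 steps: {trace[-500:].std():.3f}")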
