
OpenAI’s Chief Scientist: AI Could Produce Novel Research by the End of the Decade

Jakub Pachocki, OpenAI’s chief scientist since 2024, believes artificial intelligence models will soon be capable of producing original research and making measurable economic impacts. In a conversation with Nature, Pachocki outlined how he sees the field evolving — and how OpenAI plans to balance innovation with safety concerns.

Pachocki, who joined OpenAI in 2017 after a career in theoretical computer science and competitive programming, now leads the firm’s development of its most advanced AI systems. These systems are designed to tackle complex tasks across science, mathematics, and engineering, moving far beyond the chatbot functions that made ChatGPT a household name in 2022.

Black hole dance illuminates hidden math of the universe

In the new study, these shapes, Calabi–Yau geometries best known from string theory and pure mathematics, appeared in calculations describing the energy radiated as gravitational waves when two black holes cruised past one another. This marks the first time they’ve appeared in a context that could, in principle, be tested through real-world experiments.

Mogull likens their emergence to switching from a magnifying glass to a microscope, revealing features and patterns previously undetectable. “The appearance of such structures sheds new light on the sorts of mathematical objects that nature is built from,” he said.

These findings are expected to significantly enhance future theoretical models that aim to predict gravitational wave signatures. Such improvements will be crucial as next-generation gravitational wave detectors — including the planned Laser Interferometer Space Antenna (LISA) and the Einstein Telescope in Europe — come online in the years ahead.

Nick Bostrom — From Superintelligence to Deep Utopia — Can We Create a Perfect Society?

Since Nick Bostrom wrote Superintelligence, AI has surged from theoretical speculation to a powerful, world-shaping reality. Progress is undeniable, yet the AI safety community remains divided, caught between mathematical rigor and Swiss-cheese security. P(doom) debates rage on, but equally concerning is the risk of locking in negative-value futures for a very long time.

Zooming in on motivation selection, and especially indirect normativity, raises the question: is there a structured landscape of possible value configurations, or just a chaotic search for alignment?

From Superintelligence to Deep Utopia: the aim is not just avoiding catastrophe but ensuring resilience, meaning, and flourishing in a ‘solved’ world. In a post-instrumental, plastic utopia where humans are ‘deeply redundant’, can we find enduring meaning and purpose?

This is our moment to shape the future. What values will we encode? What futures will we entrench?

0:00 Highlights.
3:07 Intro.
4:15 Interview.

P.S. The background music at the start of the video is ‘Eta Carinae’, which I created on a Korg Minilogue XD: https://scifuture.bandcamp.com/track/.… The music at the end is ‘Hedonium 1’, which is guitar saturated with Strymon reverbs, delays and modulation: / hedonium-1

Many thanks for tuning in! Please support SciFuture by subscribing and sharing! Buy me a coffee? https://buymeacoffee.com/tech101z

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards, Adam Ford

Meet AlphaEvolve, the Google AI that writes its own code—and just saved millions in computing costs

Google DeepMind’s AlphaEvolve AI system breaks a 56-year-old mathematical record by discovering a more efficient matrix multiplication algorithm that had eluded human mathematicians since Strassen’s 1969 breakthrough.
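For context on the record: Strassen’s 1969 scheme multiplies 2×2 matrices, or 2×2 blocks of larger ones, with 7 multiplications instead of the naive 8, and recursing on blocks pushes the asymptotic cost below O(n³); the record AlphaEvolve improved reportedly concerns 4×4 complex matrices, where it found a 48-multiplication scheme against the 49 that two levels of Strassen require. A minimal sketch of the 1969 identities (illustrative Python, not DeepMind’s code):

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen (1969): multiply 2x2 matrices with 7 multiplications
    instead of 8. The entries here are scalars, but the same identities
    hold when they are matrix blocks, which makes the scheme recursive."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
assert np.array_equal(strassen_2x2(A, B), A @ B)
```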

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms

Large language models (LLMs) are remarkably versatile. They can summarize documents, generate code or even brainstorm new ideas. And now we’ve expanded these capabilities to target fundamental and highly complex problems in mathematics and modern computing.

Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.
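In outline, that loop can be sketched as below. Every name here is hypothetical, including the `llm_propose` stand-in for a Gemini call; AlphaEvolve’s actual code is not public, so this only illustrates the propose-evaluate-select pattern the post describes:

```python
import random

def evolutionary_code_search(seed_program, evaluate, llm_propose,
                             generations=100, population_size=20):
    """Toy LLM-driven evolutionary loop: an LLM mutates promising
    programs, an automated evaluator scores every candidate, and the
    highest-scoring programs survive to seed the next generation.
    `evaluate` returns a numeric score; `llm_propose` stands in for a
    model call that rewrites a parent program."""
    population = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        # Tournament selection: sample a few members, keep the fittest.
        parent = max(random.sample(population, min(3, len(population))))[1]
        child = llm_propose(parent)   # LLM proposes a code mutation
        score = evaluate(child)       # automated evaluator verifies the answer
        population.append((score, child))
        # Keep only the most promising candidates.
        population = sorted(population, reverse=True)[:population_size]
    return population[0]              # (best_score, best_program)
```

The key design point, per the post, is that the automated evaluator rather than the LLM is the source of truth: only candidates that pass verification can influence later generations.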

AlphaEvolve enhanced the efficiency of Google’s data centers, chip design and AI training processes — including training the large language models underlying AlphaEvolve itself. It has also helped design faster matrix multiplication algorithms and find new solutions to open mathematical problems, showing incredible promise for application across many areas.

Dark matter formed when fast particles slowed down and got heavy, new theory says

A study by Dartmouth researchers proposes a new theory about the origin of dark matter, the mysterious and invisible substance thought to give the universe its shape and structure. They say this hypothetical substance sprang from particles that rapidly condensed, like steam into water.

The researchers report in Physical Review Letters that, according to their mathematical models, dark matter could have formed in the early life of the universe from collisions of high-energy massless particles that lost their zip and took on an incredible amount of mass immediately after pairing up.

Dark matter is believed to exist because of observed gravitational effects that cannot be explained by visible matter. Scientists estimate that it accounts for roughly 85% of the matter in the universe.

Energy and memory: A new neural network paradigm

Listen to the first notes of an old, beloved song. Can you name that tune? If you can, congratulations—it’s a triumph of your associative memory, in which one piece of information (the first few notes) triggers the memory of the entire pattern (the song), without you actually having to hear the rest of the song again. We use this handy neural mechanism to learn, remember, solve problems and generally navigate our reality.

“It’s a network effect,” said UC Santa Barbara mechanical engineering professor Francesco Bullo, explaining that memories aren’t stored in single brain cells. “Memory storage and retrieval are dynamic processes that occur over entire networks of neurons.”

In 1982, physicist John Hopfield translated this theoretical neuroscience concept into the artificial intelligence realm, with the formulation of the Hopfield network. In doing so, not only did he provide a mathematical framework for understanding memory storage and retrieval in the human brain, he also developed one of the first recurrent artificial neural networks—the Hopfield network—known for its ability to retrieve complete patterns from noisy or incomplete inputs. Hopfield won the Nobel Prize for his work in 2024.
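To make the retrieval idea concrete, here is a minimal sketch of a binary Hopfield network, with sizes and names chosen purely for illustration: patterns are stored with a Hebbian rule, and repeated updates pull a corrupted input back toward the nearest stored pattern.

```python
import numpy as np

class HopfieldNetwork:
    """Minimal binary (+1/-1) Hopfield network."""
    def __init__(self, n):
        self.W = np.zeros((n, n))

    def store(self, patterns):
        # Hebbian rule: strengthen connections between co-active units.
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)  # no self-connections

    def recall(self, state, steps=10):
        # Asynchronous updates descend the network's energy function,
        # settling into the stored pattern nearest the input.
        s = state.copy()
        for _ in range(steps):
            for i in np.random.permutation(len(s)):
                s[i] = 1 if self.W[i] @ s >= 0 else -1
        return s

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=32)
net = HopfieldNetwork(32)
net.store([pattern])
noisy = pattern.copy()
noisy[:8] *= -1                                    # corrupt a quarter of the bits
print(np.array_equal(net.recall(noisy), pattern))  # expect True
```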

Researchers Solve “Impossible” Math Problem After 200 Years

A UNSW Sydney mathematician has developed an algebraic solution to equations long thought to be unsolvable, a groundbreaking discovery that may finally offer an answer to one of algebra’s toughest problems: how to solve high-degree polynomial equations.
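For readers wondering in what sense the problem was “impossible”: the Abel–Ruffini theorem, dating to 1824 (hence the “200 years”), shows that no general formula built from radicals exists once the degree reaches five. A brief statement for contrast, a standard result rather than anything from the new paper:

```latex
% Degree 2 admits the familiar radical formula:
\[
  ax^2 + bx + c = 0 \quad\Longrightarrow\quad
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]
% Abel--Ruffini (1824): for the general equation of degree n >= 5,
\[
  a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 = 0,
\]
% no solution can be expressed from the coefficients using only
% addition, subtraction, multiplication, division and k-th roots.
```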

Wow! AI-Powered “Sketch-on-Napkin” to Embedded Design

I was just thinking about the 1871 book Through the Looking-Glass, and What Alice Found There by mathematician, logician, Anglican deacon, writer, and photographer, the Reverend Charles Lutwidge Dodgson (a.k.a. Lewis Carroll).

Lewis Carroll’s Alice’s Adventures in Wonderland and Through the Looking-Glass continue to influence us today, not just as beloved children’s stories but as enduring works that challenge the boundaries of logic, language, and imagination.

At their heart, both books are filled with logical conundrums, puzzling paradoxes, and playful reasoning, reflecting Dodgson’s background in math and logic. He employed nonsensical situations and absurd dialogues to explore profound ideas about meaning, identity, time, and even mathematics, all disguised within fantastical storytelling.