
Genie 3: Creating dynamic worlds that you can navigate in real-time

Genie 3 is a world builder powered by generative AI. In principle, it could be built into a game engine.

One thing I’d like to do is use procedural generation as the backbone, and have generative AI refine the results in ways that regular proc-gen textures alone cannot accomplish.


Introducing Genie 3, a general purpose world model that can generate an unprecedented diversity of interactive environments. Given a text prompt, Genie 3 can generate dynamic worlds that you can navigate in real time at 24 frames per second, retaining consistency for a few minutes at a resolution of 720p.

Watch the Google DeepMind episode on Genie 3 with Hannah Fry: “Genie 3: An infinite world model | Shlomi…”

Our team has been pioneering research in simulated environments for over a decade, from training agents to master real-time strategy games to developing simulated environments for open-ended learning and robotics. This work motivated our development of world models, which are AI systems that can use their understanding of the world to simulate aspects of it, enabling agents to predict both how an environment will evolve and how their actions will affect it.
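The predict-and-act loop a world model implements can be sketched schematically. The toy below is purely illustrative: linear dynamics and all names (`ToyWorldModel`, `step`) are my own assumptions standing in for a large learned neural simulator like Genie 3, so that the action-conditioned rollout structure itself is visible.

```python
import numpy as np


class ToyWorldModel:
    """Toy action-conditioned dynamics model.

    A real world model would be a large neural network emitting video
    frames; simple linear dynamics stand in here so the interaction
    loop — predict the next state from the current state plus the
    agent's action — is easy to see.
    """

    def __init__(self, state_dim=4, action_dim=2, seed=0):
        rng = np.random.default_rng(seed)
        self.A = 0.9 * np.eye(state_dim)                      # how the world evolves on its own
        self.B = rng.normal(0.0, 0.5, (state_dim, action_dim))  # how actions affect it

    def step(self, state, action):
        # The next state depends on both the current state and the action,
        # which is exactly the two predictions the text describes.
        return self.A @ state + self.B @ action


# Interactive rollout: each step is conditioned on the accumulated state,
# which is what keeps the simulated world consistent over time.
model = ToyWorldModel()
state = np.zeros(4)
trajectory = [state]
for action in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])):
    state = model.step(state, action)
    trajectory.append(state)
```

An agent trained inside such a loop can evaluate candidate actions by rolling the model forward before acting in the real environment.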

Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks

The study presents Predictive Alignment, a local learning rule for recurrent neural networks that aligns internal network predictions with feedback. This biologically inspired method tames chaos and enables robust learning of complex patterns.
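As a rough illustration of the setting (not the paper's actual rule): a spontaneously chaotic rate network is driven by a teacher/feedback signal while a purely local, normalized delta rule adapts a readout, so the network's prediction gradually aligns with the feedback. All parameter values and variable names below are my own assumptions; Predictive Alignment itself additionally trains the recurrent weights.

```python
import numpy as np

rng = np.random.default_rng(1)
N, g = 300, 1.5                  # gain g > 1: spontaneously chaotic dynamics
dt, tau, lr = 0.1, 1.0, 0.1
W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed chaotic recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                   # feedback weights
w_out = np.zeros(N)                                # plastic readout

steps = 3000
target = np.sin(2 * np.pi * np.arange(steps) * dt / 8.0)  # signal to learn

x = rng.normal(0.0, 0.5, N)
r = np.tanh(x)
sq_err = []
for t in range(steps):
    z = w_out @ r                 # network's prediction
    e = target[t] - z             # mismatch between prediction and feedback
    # Local update: each synapse uses only its own presynaptic rate
    # and the shared scalar error (normalized delta rule).
    w_out += lr * e * r / (r @ r + 1e-6)
    # Teacher-forced feedback entrains — "tames" — the chaotic dynamics.
    x += dt / tau * (-x + W @ r + w_fb * target[t])
    r = np.tanh(x)
    sq_err.append(e ** 2)
```

With teacher forcing the chaotic network settles onto a driven orbit and the squared prediction error shrinks as the readout aligns with the feedback; the paper's contribution is doing this gently with recurrent plasticity rather than only at the readout.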

Quilter’s AI just designed an 843‑part Linux computer that booted on the first try. Hardware will never be the same

🤖AI system designed a fully functional Linux computer in one week.


Quilter’s AI designed a working 843-component Linux computer in 38 hours—a task that typically takes engineers 11 weeks. Here’s how they did it.

Spatial Intelligence Is AI’s Next Frontier

Today, leading AI technologies such as large language models (LLMs) have begun to transform how we access and work with abstract knowledge. Yet they remain wordsmiths in the dark: eloquent but inexperienced, knowledgeable but ungrounded.

For humans, spatial intelligence is the scaffolding upon which our cognition is built. It’s at work when we passively observe or actively seek to create. It drives our reasoning and planning, even on the most abstract topics. And it’s essential to the way we interact—verbally or physically, with our peers or with the environment itself. When machines are endowed with this ability, it will transform how we create and interact with real and virtual worlds—revolutionizing storytelling, robotics, scientific discovery, and beyond. This is AI’s next frontier, and why 2025 was such a pivotal year.

The candid truth is that AI’s spatial capabilities remain far from the human level. But tremendous progress has indeed been made. Multimodal LLMs, trained with voluminous multimedia data in addition to textual data, have introduced some basics of spatial awareness, and today’s AI can analyze pictures, answer questions about them, and generate hyperrealistic images and short videos.
