NVIDIA Partners With Mistral AI to Accelerate New Family of Open Models

Today, Mistral AI announced the Mistral 3 family of open-source multilingual, multimodal models, optimized across NVIDIA supercomputing and edge platforms.

Mistral Large 3 is a mixture-of-experts (MoE) model: instead of firing up every neuron for every token, it activates only the parts of the model with the most impact. The result is efficiency that delivers scale without waste and accuracy without compromise, making enterprise AI not just possible but practical.
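The idea of activating only part of the model per token can be sketched in a few lines. This is an illustrative toy, not Mistral's actual implementation: the expert count, top-k value, and dimensions are assumptions for demonstration, and each "expert" is just a random linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8      # total expert sub-networks (assumed for illustration)
TOP_K = 2            # experts actually run per token
D_MODEL = 16         # hidden dimension (assumed)

# Each "expert" here is just a random linear map; a real model uses MLPs.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def moe_forward(x):
    """Route one token vector through only its top-k experts."""
    logits = x @ router                      # router score per expert
    top = np.argsort(logits)[-TOP_K:]        # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Weighted sum of only the selected experts' outputs; the other
    # NUM_EXPERTS - TOP_K experts are never computed for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)  # full-width output, but only 2 of 8 experts computed
```

The sparsity is the whole trick: per-token compute scales with `TOP_K`, while total capacity scales with `NUM_EXPERTS`.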

Mistral AI’s new models deliver industry-leading accuracy and efficiency for enterprise AI. They will be available everywhere, from the cloud to the data center to the edge, starting Tuesday, Dec. 2.

Lifeboat Foundation Guardian Award 2025: Professor Roman V. Yampolskiy

The Lifeboat Foundation Guardian Award is annually bestowed upon a respected scientist or public figure who has warned of a future fraught with dangers and encouraged measures to prevent them.

This year’s winner is Professor Roman V. Yampolskiy. Roman coined the term “AI safety” in a 2011 publication titled *Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach*, presented at the Philosophy and Theory of Artificial Intelligence conference in Thessaloniki, Greece, and is recognized as a founding researcher in the field.

Roman is known for his groundbreaking work on AI containment, AI safety engineering, and the theoretical limits of artificial intelligence controllability. His research has been cited by over 10,000 scientists and featured in more than 1,000 media reports across 30 languages.

Watch his interview on *The Diary of a CEO* at [https://www.youtube.com/watch?v=UclrVWafRAI](https://www.youtube.com/watch?v=UclrVWafRAI), which has already received over 11 million views on YouTube alone. The Singularity has begun; please pay attention to what Roman has to say about it!


Professor Roman V. Yampolskiy, who coined the term “AI safety,” is the winner of the 2025 Guardian Award.

Open-source framework enables addition of AI to software without prompt engineering

Developers can now integrate large language models directly into their existing software using a single line of code, with no manual prompt engineering required. The open-source framework, known as byLLM, automatically generates context-aware prompts based on the meaning and structure of the program, helping developers avoid hand-crafting detailed prompts, according to a conference paper presented at the SPLASH conference in Singapore in October 2025 and published in the Proceedings of the ACM on Programming Languages.
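The idea behind auto-generated prompts can be sketched without byLLM itself. Everything below is a hypothetical illustration, not byLLM's real API: a decorator derives a prompt from a function's name, signature, and docstring, and `fake_llm` stands in for a real model call so the sketch runs offline.

```python
import inspect

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call, so the sketch runs offline."""
    return f"<model answer for: {prompt[:40]}...>"

def by_llm(func):
    """Hypothetical decorator: build a context-aware prompt from
    the program's own structure instead of hand-writing one."""
    sig = inspect.signature(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        prompt = (
            f"Task: {func.__name__.replace('_', ' ')}\n"
            f"Description: {func.__doc__ or 'n/a'}\n"
            f"Inputs: {dict(bound.arguments)}\n"
            f"Return something matching: {sig.return_annotation}"
        )
        return fake_llm(prompt)
    return wrapper

@by_llm
def summarize_review(text: str) -> str:
    """Summarize a product review in one sentence."""

result = summarize_review("Great battery life, mediocre camera.")
print(result)
```

The developer writes only the annotated signature and docstring; the "one line of code" is the decorator, and the prompt engineering happens automatically from program semantics, which is the claim the paper makes for byLLM.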

“This work was motivated by watching developers spend an enormous amount of time and effort trying to integrate AI models into applications,” said Jason Mars, an associate professor of computer science and engineering at U-M and co-corresponding author of the study.

Mixture of Experts Powers the Most Intelligent Frontier AI Models, Runs 10x Faster on NVIDIA Blackwell NVL72

“With GB200 NVL72 and Together AI’s custom optimizations, we are exceeding customer expectations for large-scale inference workloads for MoE models like DeepSeek-V3,” said Vipul Ved Prakash, cofounder and CEO of Together AI. “The performance gains come from NVIDIA’s full-stack optimizations coupled with Together AI Inference breakthroughs across kernels, runtime engine and speculative decoding.”

This performance advantage is evident across other frontier models.

Kimi K2 Thinking, the most intelligent open-source model, serves as another proof point, achieving a 10x generational performance gain when deployed on GB200 NVL72.

Second Variety (FULL audiobook) by Philip K. Dick

Second Variety audiobook.
by Philip K. Dick (1928–1982)

First published in Space Science Fiction, May 1953. “The claws were bad enough in the first place—nasty, crawling little death-robots. But when they began to imitate their creators, it was time for the human race to make peace—if it could!” When future war becomes so horrific that humans turn to machines and computers to design ways to kill each other, strange things may happen, and they do in this classic Philip K. Dick story! (Summary from the story blurb and Phil Chenevert.)

Tesla Optimus running test sets new lab record

Tesla has released a new clip of its Optimus humanoid robot running inside a lab. The video comes from the official Optimus account on X and Elon Musk shared it with his followers soon after. The robot jogs past other units in the background and keeps a steady pace on the lab floor.

The team said they had set a new personal record in the lab, and Musk stressed that the internal record had been broken, putting fresh attention on how far the project has moved in a short time.

Just set a new PR in the lab pic.twitter.com/8kJ2om7uV7 — Tesla Optimus (@Tesla_Optimus) December 2, 2025

New control system teaches soft robots the art of staying safe

Imagine a continuum soft robotic arm bending around a bunch of grapes or broccoli, adjusting its grip in real time as it lifts the object. Unlike traditional rigid robots, which generally avoid contact with the environment as much as possible and stay far away from humans for safety reasons, this arm senses subtle forces, stretching and flexing in ways that mimic the compliance of a human hand. Its every motion is calculated to avoid excessive force while achieving the task efficiently.

In the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Laboratory for Information and Decision Systems (LIDS), these seemingly simple movements are the culmination of complex mathematics, careful engineering, and a vision for robots that can safely interact with humans and delicate objects.

Soft robots, with their deformable bodies, promise a future where machines move more seamlessly alongside people, assist in caregiving, or handle delicate items in industrial settings. Yet that very flexibility makes them difficult to control. Small bends or twists can produce unpredictable forces, raising the risk of damage or injury. This motivates the need for safe control strategies for soft robots.
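The core safety idea, applying task force but never more than a safe bound, can be shown with a one-dimensional toy. This is an illustrative sketch under assumed numbers, not the CSAIL/LIDS controller: a virtual spring generates grip force, saturated at a hard limit.

```python
F_MAX = 2.0          # assumed safe contact-force limit, in newtons
STIFFNESS = 50.0     # assumed virtual spring stiffness, N per metre

def safe_grip_force(penetration_m: float) -> float:
    """Spring-like contact force, saturated at the safety limit."""
    raw = STIFFNESS * max(penetration_m, 0.0)   # no force before contact
    return min(raw, F_MAX)                      # never exceed the safe cap

for depth in (0.0, 0.01, 0.05, 0.2):
    print(f"penetration {depth:.2f} m -> force {safe_grip_force(depth):.2f} N")
```

Real soft-robot controllers must do this across a continuum of deformable material rather than a single spring, which is what makes the control problem mathematically hard; the saturation principle, though, is the same.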

Cyber, AI & Critical Infrastructure Convergence Risks

By Chuck Brooks, president of Brooks Consulting International

#cybersecurity #artificialintelligence #criticalinfrastructure #risks



Federal agencies and their industry counterparts are moving at a breakneck pace to modernize in this fast-changing digital world. Artificial intelligence, automation, behavioral analytics, and autonomous decision systems have become integral to mission-critical operations. This includes everything from managing energy and securing borders to delivering healthcare, supporting defense logistics, and verifying identities. These technologies are undeniably enhancing capabilities. However, they are also subtly altering the landscape of risk.

The real concern isn’t any one technology in isolation, but rather the way these technologies now intersect and rely on each other. We’re leaving behind a world of isolated cyber threats. Now, we’re facing convergence risk, a landscape where cybersecurity, artificial intelligence, data integrity, and operational resilience are intertwined in ways that often remain hidden until a failure occurs. We’re no longer just securing networks. We’re safeguarding confidence, continuity, and the trust of society.
