
Ignorance Is the Greatest Evil: Why Certainty Does More Harm Than Malice

The most dangerous people are not the malicious ones. They’re the ones who are certain they’re right.

Most of the harm in history has been done by people who believed they knew what was right — and acted on that belief without recognizing the limits of their own knowledge.

Socrates understood this long ago: the most dangerous thing is not *not knowing*, but *not knowing that we don’t know* — especially when paired with power.

Read on to find out why:

* certainty often does more harm than malice
* humility isn’t weakness, it’s discipline
* action doesn’t require certainty, only responsibility
* and why, in an age of systems, algorithms, and institutions, this kind of ignorance has quietly become structural.

This isn’t an argument for paralysis or relativism.

It’s an argument for acting without pretending we are infallible.

AI learns to build simple equations for complex systems

A research team at Duke University has developed a new AI framework that can uncover simple, understandable rules that govern some of the most complex dynamics found in nature and technology.

The AI system works much like how history’s great “dynamicists”—those who study systems that change over time—discovered many laws of physics that govern such systems’ behaviors. Similar to how Newton, the first dynamicist, derived the equations that connect force and movement, the AI takes data about how complex systems evolve over time and generates equations that accurately describe them.

The AI, however, can go even further than human minds, untangling complicated nonlinear systems with hundreds, if not thousands, of variables into simpler rules with fewer dimensions.
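The article does not spell out the framework’s internals, so here is a minimal, hypothetical sketch of the general approach such systems often take: sparse regression over a library of candidate terms (in the spirit of SINDy), keeping only the few terms needed to describe the data. The example system, candidate library, and threshold below are invented for illustration, not the Duke team’s actual method.

```python
# Illustrative sketch of equation discovery via sparse regression.
# Not the Duke framework; it shows the general idea of fitting a small set
# of interpretable terms to time-series data from a dynamical system.
import numpy as np

# Synthetic data from a known system: dx/dt = -2x + x*y, dy/dt = y - x*y
def true_rhs(state):
    x, y = state
    return np.array([-2.0 * x + x * y, 1.0 * y - x * y])

dt, steps = 0.001, 20000
traj = np.zeros((steps, 2))
traj[0] = [2.0, 1.0]
for k in range(steps - 1):
    traj[k + 1] = traj[k] + dt * true_rhs(traj[k])   # forward Euler integration

# Estimate time derivatives by finite differences
dX = np.gradient(traj, dt, axis=0)

# Candidate term library: [1, x, y, x^2, x*y, y^2]
x, y = traj[:, 0], traj[:, 1]
library = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
names = ["1", "x", "y", "x^2", "x*y", "y^2"]

# Sequential thresholded least squares: zero out small coefficients, refit
coeffs = np.linalg.lstsq(library, dX, rcond=None)[0]
for _ in range(10):
    small = np.abs(coeffs) < 0.1
    coeffs[small] = 0.0
    for j in range(2):
        keep = ~small[:, j]
        if keep.any():
            coeffs[keep, j] = np.linalg.lstsq(library[:, keep], dX[:, j], rcond=None)[0]

for j, var in enumerate(["dx/dt", "dy/dt"]):
    terms = [f"{coeffs[i, j]:+.2f}*{names[i]}" for i in range(len(names)) if coeffs[i, j] != 0]
    print(var, "=", " ".join(terms))
```

Run on the synthetic trajectory, the surviving terms should recover something close to the two-term equations the data was generated from, which is the sense in which such frameworks turn high-dimensional observations into simple, readable rules.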

Making lighter work of calculating fluid and heat flow

Scientists from Tokyo Metropolitan University have re-engineered the popular Lattice-Boltzmann Method (LBM) for simulating the flow of fluids and heat, making it lighter and more stable than the state-of-the-art.

By reformulating the algorithm with a few extra inputs, they got around the need to store certain data, some of which spans the millions of points over which a simulation is run. Their findings may overcome a key bottleneck in LBM: memory usage.

The work is published in the journal Physics of Fluids.
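To see why memory is the bottleneck, here is a rough back-of-the-envelope sketch of the bookkeeping in a conventional LBM run (a generic D3Q19 layout; this is not the Tokyo Metropolitan reformulation, just the standard storage pattern it improves on, with an illustrative grid size):

```python
# Rough estimate of memory use in a conventional Lattice-Boltzmann simulation.
# Generic D3Q19 layout with an illustrative grid; not the re-engineered method.

nx, ny, nz = 512, 512, 512        # grid points in each direction
q = 19                            # discrete velocities per point (D3Q19)
bytes_per_value = 8               # double precision

points = nx * ny * nz
distributions = points * q                          # one f_i per velocity per point
per_copy_gb = distributions * bytes_per_value / 1e9

print(f"grid points:            {points:,}")
print(f"one distribution field: {per_copy_gb:.1f} GB")
print(f"with double buffering:  {2 * per_copy_gb:.1f} GB")
# A thermal LBM typically carries a second set of distributions for the
# temperature field on top of this, so avoiding some of that storage matters.
```

Even this modest grid already stores over a hundred million points, each carrying many distribution values, which is why trading stored data for a few extra inputs to the update rule can pay off.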

Cracking the mystery of heat flow in few-atoms thin materials

For much of my career, I have been fascinated by the ways in which materials behave when we reduce their dimensions to the nanoscale. Over and over, I’ve learned that when we shrink a material down to just a few nanometers in thickness, the familiar textbook rules of physics begin to bend, stretch, or sometimes break entirely. Heat transport is one of the areas where this becomes especially intriguing, because heat is carried by phonons—quantized vibrations of the atomic lattice—and phonons are exquisitely sensitive to spatial confinement.

A few years ago, something puzzling emerged in the literature. Molecular dynamics simulations showed that ultrathin silicon films exhibit a distinct minimum in their thermal conductivity at around one to two nanometers thickness, which corresponds to just a few atomic layers. Even more surprisingly, the thermal conductivity starts to increase again if the material is made even thinner, approaching extreme confinement and the 2D limit.

This runs counter to what every traditional model would predict. According to classical theories such as the Boltzmann transport equation or the Fuchs–Sondheimer boundary-scattering framework, reducing thickness should monotonically suppress thermal conductivity because there is simply less room for phonons to travel freely and carry heat around. Yet the simulations done by the team of Alan McGaughey at Carnegie Mellon University in Pittsburgh insisted otherwise, and no established theory could explain why.
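For context on why the classical picture predicts a monotonic drop, here is a minimal gray-model sketch (a textbook-style illustration, not the McGaughey team’s simulations). Kinetic theory gives the thermal conductivity from the volumetric heat capacity C, the average phonon group velocity v, and the mean free path Λ, while a Matthiessen-type rule caps the mean free path through boundary scattering in a film of thickness t (α is a geometry factor of order one):

```latex
\[
  \kappa(t) \;\approx\; \tfrac{1}{3}\, C\, v\, \Lambda_{\mathrm{eff}}(t),
  \qquad
  \frac{1}{\Lambda_{\mathrm{eff}}(t)} \;=\; \frac{1}{\Lambda_{\mathrm{bulk}}} + \frac{1}{\alpha\, t}.
\]
```

In this picture, shrinking t can only shorten the effective mean free path and therefore lower the conductivity, which is why a conductivity that rises again near the 2D limit has no place in the classical framework.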

Why we can’t stop clicking on rage bait

Stanford research reveals creators feel exhausted, depressed, and financially unstable due to constant pressure to post, algorithm unpredictability, and frequent “demonetization.” While rage bait may work short-term, it’s unsustainable. Creators eventually seek other revenue streams, only to be replaced by new outrage merchants.

Bottom line: Rage bait is a symptom of platforms’ engagement-based economic incentives—not an isolated phenomenon, but a “highly visible result” of the ecosystem social media companies have created.


“Rage bait” is Oxford’s Word of the Year. What makes anger so appealing?

String Theory Inspires a Brilliant, Baffling New Math Proof

When the team posted their proof in August, many mathematicians were excited. It was the biggest advance in the classification project in decades, and hinted at a new way to tackle the classification of polynomial equations well beyond four-folds.

But other mathematicians weren’t so sure. Six years had passed since the lecture in Moscow. Had Kontsevich finally made good on his promise, or were there still details to fill in?

And how could they assuage their doubts, when the proof’s techniques were so completely foreign — the stuff of string theory, not polynomial classification? “They say, ‘This is black magic, what is this machinery?’” Kontsevich said.

AI Guides Robot on the ISS for the First Time

Dr. Somrita Banerjee: “This is the first time AI has been used to help control a robot on the ISS. It shows that robots can move faster and more efficiently without sacrificing safety, which is essential for future missions where humans won’t always be able to guide them.”


How can AI robots help improve human space exploration? This is the question a recent study presented at the 2025 International Conference on Space Robotics hopes to address, with a team of researchers investigating new methods for enhancing AI robots in space. The study could help scientists develop new methods for improving human-robot collaboration, specifically as humanity begins settling on the Moon and eventually Mars.

For the study, the researchers examined how a technique called machine learning-based warm starts could be used to improve robot autonomy. To do so, they tested the algorithm on the Astrobee free-flying robot aboard the International Space Station (ISS), where it operated while floating in microgravity. The goal of the study was to ascertain whether Astrobee could navigate its way around the ISS without human intervention, relying only on its algorithm to plan safe paths through the station. In the end, the researchers found that Astrobee successfully navigated the tight interior of the ISS with little need for human intervention.
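The paper’s exact formulation isn’t given here, so the following is a toy sketch of what a warm start means in this setting: a learned model supplies the optimizer’s initial guess, so it can converge in fewer iterations than a cold start. The cost function, the obstacle, and the hand-made “learned” guess below are all illustrative assumptions, not the Astrobee flight software.

```python
# Toy illustration of a "warm start": seed a trajectory optimizer with a good
# initial guess (in practice, one predicted by a learned model) instead of a
# naive one. Everything here is invented for illustration.
import numpy as np
from scipy.optimize import minimize

start, goal, obstacle = np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([2.5, 0.1])
N = 20  # number of free waypoints

def cost(flat):
    pts = np.vstack([start, flat.reshape(N, 2), goal])
    segment = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    clearance = np.linalg.norm(pts - obstacle, axis=1)
    # short, even path segments plus a penalty for passing near the obstacle
    return np.sum(segment**2) + np.sum(np.exp(-10.0 * clearance))

# Cold start: straight-line guess. Warm start: a guess already bent around the
# obstacle, standing in for what a learned model might predict.
straight = np.linspace(start, goal, N + 2)[1:-1]
warm = straight + np.column_stack([np.zeros(N), 0.5 * np.sin(np.linspace(0, np.pi, N))])

for name, guess in [("cold start", straight), ("warm start", warm)]:
    res = minimize(cost, guess.ravel(), method="L-BFGS-B")
    print(f"{name}: cost={res.fun:.3f}, iterations={res.nit}")
```

The point of the sketch is the shape of the idea: the optimizer still enforces safety and optimality, while the learned guess only reduces how much work it takes to get there.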

New model frames human reinforcement learning in the context of memory and habits

Humans and most other animals are known to be strongly driven by expected rewards or adverse consequences. The process of acquiring new skills or adjusting behaviors in response to positive outcomes is known as reinforcement learning (RL).

RL has been widely studied over the past decades and has even been adapted to train computational models, including some deep learning algorithms. Existing models of RL suggest that this type of learning is linked to dopaminergic pathways (i.e., neural pathways that respond to differences between expected and experienced outcomes).
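A minimal sketch of that standard account (the classic delta rule, not the new model introduced below): an action’s estimated value is nudged by the reward prediction error, the difference between the experienced and the expected outcome. The two-option bandit, reward probabilities, and learning rate are illustrative choices.

```python
# Delta-rule sketch of the standard reinforcement-learning account: values are
# updated by the reward prediction error (expected vs. experienced outcome).
import random

alpha = 0.1                                   # learning rate
values = {"left": 0.0, "right": 0.0}          # expected reward for each action
reward_prob = {"left": 0.2, "right": 0.8}     # true (unknown) reward probabilities

random.seed(0)
for trial in range(500):
    # epsilon-greedy choice between the two actions
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    prediction_error = reward - values[action]   # dopamine-like RPE signal
    values[action] += alpha * prediction_error   # delta-rule update

print(values)   # estimates should approach the true reward probabilities
```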

Anne G. E. Collins, a researcher at University of California, Berkeley, recently developed a new model of RL specific to situations in which people’s choices have uncertain context-dependent outcomes, and they try to learn the actions that will lead to rewards. Her paper, published in Nature Human Behaviour, challenges the assumption that existing RL algorithms faithfully mirror psychological and neural mechanisms.

Infant-inspired framework helps robots learn to interact with objects

Over the past decades, roboticists have introduced a wide range of advanced systems that can move around in their surroundings and complete various tasks. Most of these robots can effectively collect images and other data from their surroundings, using computer vision algorithms to interpret that data and plan their future actions.

In addition, many robots leverage large language models (LLMs) or other natural language processing (NLP) models to interpret instructions, make sense of what users are saying and answer them in specific languages. Despite their ability to both make sense of their surroundings and communicate with users, most robotic systems still struggle when tackling tasks that require them to touch, grasp and manipulate objects, or come in physical contact with people.

Researchers at Tongji University and the State Key Laboratory of Intelligent Autonomous Systems recently developed a new framework designed to improve the process by which robots learn to physically interact with their surroundings.
