As AI becomes more integrated into our lives, building it with privacy at its core is a critical frontier for the field. Differential privacy (DP) offers a mathematically sound solution: it adds calibrated noise during training to prevent the model from memorizing individual examples. However, applying DP to LLMs introduces trade-offs that are crucial to understand. DP noise alters traditional scaling laws (the rules describing how performance improves with model and data size), reducing training stability (the model's ability to learn consistently without catastrophic events like loss spikes or divergence) and significantly increasing both batch size (the number of training examples processed simultaneously) and computation costs.
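To make the mechanism concrete, here is a minimal sketch of the standard DP-SGD recipe (clip each per-example gradient, average, add Gaussian noise). The function and parameter names such as clip_norm and noise_multiplier are illustrative choices for this sketch, not VaultGemma's actual training code.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD-style update: clip each example's gradient to bound its
    influence, average, then add Gaussian noise calibrated to that bound.
    Illustrative sketch only, not VaultGemma's training code."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Rescale so no single example contributes more than clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch_size = len(clipped)
    mean_grad = np.mean(clipped, axis=0)
    # The noise scale is set by the sensitivity (clip_norm) and shrinks as
    # the batch grows, which is why DP training favors very large batches.
    sigma = noise_multiplier * clip_norm / batch_size
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)

grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]
print(dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.5))
```

The final comment hints at the batch-size pressure described above: averaging over more examples dilutes the injected noise, so DP training pushes toward far larger batches than non-private training.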
Our new research, “Scaling Laws for Differentially Private Language Models”, conducted in partnership with Google DeepMind, establishes laws that accurately model these intricacies, providing a complete picture of the compute-privacy-utility trade-offs. Guided by this research, we’re excited to introduce VaultGemma, the largest (1B parameters) open model trained from scratch with differential privacy. We are releasing the weights on Hugging Face and Kaggle, alongside a technical report, to advance the development of the next generation of private AI.
Scientists have discovered that the protective cell layers lining our organs operate like an electrical surveillance system, using lightning-like flashes to identify and eliminate their most energy-depleted neighbors. This cellular quality control mechanism, revealed in a new Nature study, could reshape our understanding of diseases from cancer to stroke.
The research team from King’s College London and the Francis Crick Institute uncovered this process while studying epithelial cells, the tightly packed cellular barriers that line every organ in the human body. These cells constantly turn over to maintain healthy protective layers, but researchers had long puzzled over which specific cells get selected for elimination in crowded tissues.
Using specialized microscopy, the scientists noticed something unexpected: brief, lightning-like electrical flashes around cells just before they were squeezed out and died. This electrical signature, they discovered, wasn’t random but represented a sophisticated energy-sensing mechanism that targets the cellular equivalent of the weakest links.
A strand of hair might seem like an unlikely window into a child’s psychological wellbeing, but new research from the University of Waterloo suggests that measuring stress hormones in hair samples could help identify which children with chronic illnesses are most at risk for developing serious mental health problems.
The four-year study of 244 Canadian children reveals a concerning pattern: more than two-thirds of kids living with chronic physical conditions showed persistently elevated levels of cortisol, the body’s primary stress hormone, measured through their hair. These children also displayed more symptoms of depression, anxiety, and behavioral problems compared to peers whose stress levels naturally declined over time.
A number of chip companies, importantly Intel and IBM but also the Arm collective and AMD, have recently come out with new CPU designs that feature native support for artificial intelligence (AI) and its related machine learning (ML). The need for math engines specifically designed to support machine learning algorithms, particularly for inference workloads but also for certain kinds of training, has been covered extensively here at The Next Platform.
Just to rattle off a few of them, consider the impending “Cirrus” Power10 processor from IBM, which is due in a matter of days from Big Blue in its high-end NUMA machines and which has a new matrix math engine aimed at accelerating machine learning. Or IBM’s “Telum” z16 mainframe processor coming next year, which was unveiled at the recent Hot Chips conference and which has a dedicated mixed precision matrix math core for the CPU cores to share. Intel is adding its Advanced Matrix Extensions (AMX) to its future “Sapphire Rapids” Xeon SP processors, which should have been here by now but which have been pushed out to early next year. Arm Holdings has created future Arm core designs, the “Zeus” V1 core and the “Perseus” N2 core, that will have substantially wider vector engines that support the mixed precision math commonly used for machine learning inference, too. Ditto for the vector engines in the “Milan” Epyc 7003 processors from AMD.
All of these chips are designed to keep inference on the CPUs, where in a lot of cases it belongs for data security, data compliance, and application latency reasons.
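To show what that mixed precision math does in practice, here is a minimal NumPy sketch. It is an emulation under assumptions, not any vendor's API: float16 stands in for the bfloat16 or int8 these engines typically consume, with accumulation carried in float32.

```python
import numpy as np

def mixed_precision_matmul(a, b):
    """Emulate a mixed-precision matrix engine: operands live in a narrow
    format (float16 here), while multiply-accumulate runs in float32."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # Widen before the matmul so sums accumulate in float32; the only
    # precision lost is the initial rounding of the inputs.
    return a16.astype(np.float32) @ b16.astype(np.float32)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))
err = np.abs(mixed_precision_matmul(a, b) - a @ b).max()
print(f"max deviation from full-precision result: {err:.4f}")
```

Storing operands narrow cuts the memory traffic per element, while the wide accumulator keeps rounding error from compounding across the dot products; that is the basic bargain behind engines like IBM’s matrix math engine and Intel’s AMX.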
A team of researchers at Rice University has developed a faster and cleaner method for recovering aluminum and removing toxic metals from bauxite residue, or red mud, which is a hazardous by-product of aluminum production.
This new technique, published in ACS Applied Materials & Interfaces, involves a brief electrical pulse lasting under one minute, along with a small amount of chlorine gas. If implemented on a larger scale, it could revolutionize global waste management and materials recovery.
The process uses flash Joule heating (FJH), which rapidly heats materials with a short, high-power electrical pulse to vaporize harmful metals, leaving behind a residue rich in aluminum. This aluminum-rich material can then be repurposed into durable ceramic tiles or bricks, or fed back into the normal aluminum production process. The method offers a practical and scalable way to turn a significant pollution problem into valuable materials, marking an advancement in industrial waste recovery.
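For a rough sense of the energy scales behind a sub-minute pulse, here is a back-of-envelope Joule heating estimate. Every numeric value below is a hypothetical placeholder chosen for illustration, not a figure from the Rice study.

```python
# Back-of-envelope flash Joule heating estimate.
# All parameter values are hypothetical placeholders, not from the study.

current = 100.0        # A, pulse current (hypothetical)
resistance = 1.0       # ohm, sample resistance (hypothetical)
duration = 1.0         # s, pulse length (well under a minute)

mass = 0.001           # kg, sample mass (hypothetical 1 g)
specific_heat = 800.0  # J/(kg*K), rough value for a mineral oxide

energy = current**2 * resistance * duration  # Joule heating: E = I^2 * R * t
delta_t = energy / (mass * specific_heat)    # lossless estimate: dT = E / (m * c)

print(f"deposited energy: {energy:.0f} J")
print(f"idealized temperature rise: {delta_t:.0f} K")
```

Even with these modest placeholder numbers, the idealized temperature rise runs to thousands of kelvin, which is why a pulse lasting well under a minute can volatilize the toxic metals while leaving the aluminum-rich residue behind.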