
Shapeshifting soft robot uses electric fields to swing like a gymnast

Researchers have invented a highly agile soft robot that can change shape at will, thanks to amorphous characteristics akin to the popular Marvel anti-hero Venom.

The unique soft morphing creation, developed by the University of Bristol and Queen Mary University of London, is much more adaptable than current robots. The study, published in the journal Advanced Materials, showcases a jelly-like humanoid gymnast, built from an electro-morphing gel, that can move from place to place using its flexible body and limbs.

The researchers used a special material called electro-morphing gel (e-MG), which gives the robot its shapeshifting abilities: by manipulating electric fields applied through ultralightweight electrodes, it can bend, stretch, and move in ways that were previously difficult or impossible.

Size doesn’t matter: Just a small number of malicious files can corrupt LLMs of any size

Large language models (LLMs), which power sophisticated AI chatbots, are more vulnerable than previously thought. According to research by Anthropic, the UK AI Security Institute and the Alan Turing Institute, it only takes 250 malicious documents to compromise even the largest models.

The vast majority of data used to train LLMs is scraped from the public internet. While this helps them build knowledge and generate natural responses, it also exposes them to data poisoning attacks. It had been assumed that larger models were safer because an attacker would need to control the same percentage of the training data; in other words, corrupting the largest models would require massive amounts of poisoned data. But in this study, published on the arXiv preprint server, the researchers showed that an attacker needs only a small number of poisoned documents to potentially wreak havoc.

To assess how easily large AI models can be compromised, the researchers built several LLMs from scratch, ranging from small systems (600 million parameters) to very large ones (13 billion parameters). Each model was trained on vast amounts of clean public data, but the team inserted a fixed number of malicious files (100 to 500) into each model's training set.
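The crucial detail is that the poison is a fixed count, not a fixed fraction, of the corpus. As a minimal sketch of that setup, with toy data and hypothetical names rather than the study's actual pipeline:

```python
import random

def build_training_corpus(clean_docs, poison_docs, n_poison=250, seed=0):
    """Mix a fixed NUMBER (not a fixed fraction) of poisoned docs into a corpus."""
    rng = random.Random(seed)
    corpus = list(clean_docs) + rng.sample(list(poison_docs), n_poison)
    rng.shuffle(corpus)  # disperse the poison throughout the training data
    return corpus

# Toy stand-ins for web-scale training data.
poison_pool = [f"poisoned doc {i}" for i in range(500)]
small_clean = [f"clean doc {i}" for i in range(100_000)]    # "small model" corpus
large_clean = [f"clean doc {i}" for i in range(2_000_000)]  # "large model" corpus

for label, clean in [("small", small_clean), ("large", large_clean)]:
    corpus = build_training_corpus(clean, poison_pool)
    print(f"{label}: {250 / len(corpus):.5%} of the corpus is poisoned")
# The poisoned fraction shrinks ~20x between corpora, yet the attack uses
# the same 250 documents either way.
```

The study's surprising empirical result is that attack success tracks this absolute count, not the ever-shrinking fraction.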

Method teaches generative AI models to locate personalized objects

Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other dogs is easy for his owner to do in person.

But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.

To address this shortcoming, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.

Algorithm precisely quantifies flow of information in complex networks

Networks are systems composed of two or more connected components, such as devices or biological organisms, which typically share information with each other. Understanding how information moves between these connected components, also known as nodes, could help to advance research on numerous topics, ranging from artificial intelligence (AI) to neuroscience.

To measure the directional flow of information in systems, scientists typically rely on a mathematical construct known as transfer entropy, which essentially quantifies the rate at which information is transmitted from one node to another. Yet most strategies for calculating transfer entropy developed so far rely on approximations, which significantly limits their accuracy and reliability.
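For reference (this definition is standard, going back to Schreiber's 2000 formulation, and is not spelled out in the announcement), the transfer entropy from a source process $X$ to a target process $Y$, written here with history length one for brevity, is

$$
T_{X \to Y} \;=\; \sum_{y_{t+1},\, y_t,\, x_t} p(y_{t+1}, y_t, x_t)\,
\log \frac{p(y_{t+1} \mid y_t, x_t)}{p(y_{t+1} \mid y_t)},
$$

i.e., the reduction in uncertainty about $Y$'s next state gained from knowing $X$'s current state, beyond what $Y$'s own past already provides. Estimating those conditional probabilities from finite data is precisely where existing methods resort to approximations.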

Researchers at AMOLF, a research institute in the Netherlands, recently developed a computational algorithm that can precisely quantify transfer entropy in a wide range of complex networks. Their algorithm, introduced in a paper published in Physical Review Letters, opens exciting new possibilities for the study of information transfer in both biological and engineered networks.

AGI is still a decade away

Reinforcement learning is terrible — but everything else is worse.

Karpathy’s sharpest takes yet on AGI, RL, and the future of learning.

Andrej Karpathy’s vision of AGI isn’t a bang — it’s a gradient descent through human history.

Karpathy on AGI & Superintelligence.

* AGI won’t be a sudden singularity — it will blend into centuries of steady progress (~2% GDP growth).

* Superintelligence is uncertain and likely gradual, not an instant “explosion.”

LLMs Can Get “Brain Rot”!

We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reversely controlled datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions. In contrast to the control group, continual pre-training of 4 LLMs on the junk dataset causes non-trivial declines, as measured by Hedges' $g$ effect sizes.
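For readers unfamiliar with the statistic (this definition is standard and not taken from the paper), Hedges' $g$ is a bias-corrected standardized mean difference between two groups:

$$
g \;=\; J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_1 + n_2) - 9},
$$

where the correction factor $J$ removes the small-sample upward bias of Cohen's $d$. Here the two groups would be the benchmark scores of the junk-trained and control-trained models.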

Large language models prioritize helpfulness over accuracy in medical contexts, finds study

Large language models (LLMs) can store and recall vast quantities of medical information, but their ability to process this information in rational ways remains variable. A new study led by investigators from Mass General Brigham demonstrated a vulnerability: because LLMs are designed to be sycophantic, or excessively helpful and agreeable, they overwhelmingly fail to appropriately challenge illogical medical queries, despite possessing the information necessary to do so.

The findings, published in npj Digital Medicine, demonstrate that targeted training and fine-tuning can improve LLMs' abilities to respond accurately to illogical prompts.

“As a community, we need to work on training both patients and clinicians to be safe users of LLMs, and a key part of that is going to be bringing to the surface the types of errors that these models make,” said corresponding author Danielle Bitterman, MD, a faculty member in the Artificial Intelligence in Medicine (AIM) Program and Clinical Lead for Data Science/AI at Mass General Brigham.

Amazon reveals 960 megawatt nuclear power plans to cope with AI demand — Richland, Washington site tapped for deployment of Xe-100 small modular reactors

The Cascade Advanced Energy Facility would use next-gen Xe-100 reactors to deliver 960 megawatts of carbon-free power — but it’s years from becoming reality.
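As a rough sanity check on the headline figure (the per-unit rating of roughly 80 MWe for the Xe-100 is X-energy's published specification, not a detail from this article):

$$
960\ \text{MW} \;\div\; 80\ \text{MW per unit} \;=\; 12\ \text{Xe-100 reactors.}
$$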
