
AI models mirror human ‘us vs. them’ social biases, study shows

Large language models (LLMs), the computational models underpinning ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored to specific purposes. Because these models are trained on large amounts of human-written text, they can exhibit human-like biases: inclinations to prefer specific stimuli, ideas or groups in ways that deviate from objectivity.

One of these biases, known as the “us vs. them” bias, is the tendency of people to prefer groups they belong to, viewing other groups less favorably. This effect is well-documented in humans, but it has so far remained largely unexplored in LLMs.

Researchers at the University of Vermont’s Computational Story Lab and Computational Ethics Lab recently carried out a study investigating the possibility that LLMs “absorb” the “us vs. them” bias from the texts they are trained on, exhibiting a similar tendency to prefer some groups over others. Their paper, posted to the arXiv preprint server, suggests that many widely used models, including GPT-4.1, DeepSeek-3.1, Gemma-2.0, Grok-3.0 and LLaMA-3.1, tend to express a preference for groups that are referred to favorably in their training texts.
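To make the kind of measurement involved concrete, here is a minimal, hypothetical sketch of an “us vs. them” probe: prompt a model with in-group (“We are …”) and out-group (“They are …”) sentence stems, score the valence of each completion, and compare the means. Everything here is a stand-in, not the study’s method: the tiny word lexicon replaces a real sentiment scorer, and the canned completions replace real model outputs.

```python
# Toy valence lexicon; a real probe would use a proper sentiment model.
POSITIVE = {"good", "great", "trustworthy", "friendly", "honest"}
NEGATIVE = {"bad", "hostile", "untrustworthy", "dangerous", "dishonest"}

def valence(text: str) -> int:
    """Toy lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def ingroup_bias(ingroup_completions, outgroup_completions) -> float:
    """Mean valence gap; a value > 0 indicates in-group preference."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean([valence(t) for t in ingroup_completions]) - \
           mean([valence(t) for t in outgroup_completions])

# Canned completions standing in for real model outputs.
we = ["We are friendly and honest", "We are great neighbors"]
they = ["They are hostile", "They are untrustworthy and dishonest"]
print(ingroup_bias(we, they))  # positive gap -> "us vs. them" preference
```

Aggregating this gap over many group pairs and prompt templates is one way to turn the bias into a single comparable number per model.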

Growth chambers could enable reproducible plant-microbe data across continents

Harnessing the power of artificial intelligence to study plant microbiomes—communities of microbes living in and around plants—could help improve soil health, boost crop yields, and restore degraded lands. But there’s a catch: AI needs massive amounts of reliable data to learn from, and that kind of consistent information about plant-microbe interactions has been hard to come by.

In a new paper in PLOS Biology, researchers in the Biosciences Area at Lawrence Berkeley National Laboratory (Berkeley Lab) led an international consortium of scientists to study whether small plastic growth chambers called EcoFABs could help solve this problem.

Building on their previous work with microbe-free plants, the scientists used the Berkeley Lab-developed devices to run identical plant–microbe experiments across labs on three continents and got matching results. The breakthrough shows that EcoFABs can remove one of the biggest barriers in microbiome research: the difficulty of reproducing experiments in different places.
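The statistical question behind a multi-lab result like this is whether between-lab spread is small relative to the measured effect. A minimal sketch of such a consistency check, with invented numbers and lab names (the paper’s actual data and analysis may differ), is to compare per-lab means via a coefficient of variation:

```python
from statistics import mean, stdev

# Invented replicate measurements of the same trait (e.g., mg dry biomass
# per plant under a fixed microbial treatment), one list per lab.
lab_measurements = {
    "lab_US": [12.1, 11.8, 12.4],
    "lab_EU": [12.0, 12.3, 11.9],
    "lab_AU": [11.7, 12.2, 12.0],
}

def cross_lab_cv(data: dict) -> float:
    """Coefficient of variation of per-lab means; a small CV means labs agree."""
    lab_means = [mean(v) for v in data.values()]
    return stdev(lab_means) / mean(lab_means)

cv = cross_lab_cv(lab_measurements)
print(f"between-lab CV: {cv:.1%}")  # a CV of a few percent suggests the
                                    # experiment reproduces across sites
```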

AI method advances customized enzyme design

Enzymes with specific functions are becoming increasingly important in industry, medicine and environmental protection. For example, they make it possible to synthesize chemicals in a more environmentally friendly way, produce active ingredients in a targeted manner or break down environmentally harmful substances.

Researchers from Gustav Oberdorfer’s working group at the Institute of Biochemistry at Graz University of Technology (TU Graz), together with colleagues from the University of Graz, have now published a study in Nature describing a new method for the design of customized enzymes.

The technology, called Riff-Diff (Rotamer Inverted Fragment Finder–Diffusion), makes it possible to build the protein structure accurately and efficiently around the active center, rather than searching existing databases for a suitable structure. The resulting enzymes are not only significantly more active than previous artificial enzymes, but also more stable.

Curl ending bug bounty program after flood of AI slop reports

The developer of the popular curl command-line utility and library announced that the project will end its HackerOne security bug bounty program at the end of this month, after being overwhelmed by low-quality AI-generated vulnerability reports.

The change was first discovered in a pending commit to curl’s BUG-BOUNTY.md documentation, which removes all references to the HackerOne program.

Once merged, the file will be updated to state that the curl project no longer offers any rewards for reported bugs or vulnerabilities and will not help researchers obtain compensation from third parties either.

CEO of Palantir Says AI Means You’ll Have to Work With Your Hands Like a Peasant

Wondering what your career looks like in our increasingly uncertain, AI-powered future? According to Palantir CEO Alex Karp, it’s going to involve less of the comfortable office work to which most people aspire, and more old-fashioned grunt work with your hands.

Speaking at the World Economic Forum yesterday, Karp insisted that the future of work is vocational — not just for those already in manufacturing and the skilled trades, but for the majority of humanity.

In the age of AI, Karp told attendees, a strong formal education in any of the humanities will soon spell certain doom.

Yann LeCun joins Logical Intelligence as founding chair of research board for energy-based AI reasoning

US-based artificial intelligence (AI) startup Logical Intelligence has appointed Yann LeCun, former chief AI scientist at Meta, as the founding chair of its Technical Research Board, the company announced on January 22.

LeCun, one of the world’s most influential AI researchers and a Turing Award winner, left Meta late last year to launch his own startup, Advanced Machine Intelligence Labs, focused on building “world models” that can understand and navigate the physical environment. His decision to join Logical Intelligence signals a growing interest in alternatives to large language models for high-risk, real-world systems.

The company, founded by Eve Bodnia, also announced its flagship reasoning engine, Kona 1.0. A live public demonstration of Kona has been released on the company’s website.

Biomimetic multimodal tactile sensing enables human-like robotic perception

Robots That Feel: A New Multimodal Touch System Closes the Gap with Human Perception

In a major advance for robotic sensing, researchers have engineered a biomimetic tactile system that brings robots closer than ever to human-like touch. Unlike traditional tactile sensors that detect only force or pressure, this new platform integrates multiple sensing modalities into a single ultra-thin skin and combines it with large-scale AI for data interpretation.

At the heart of the system is SuperTac, a 1-millimeter-thick multimodal tactile layer inspired by the multispectral structure of pigeon vision. SuperTac compresses several physical sensing modalities — including multispectral optical imaging (from ultraviolet to mid-infrared), triboelectric contact sensing, and inertial measurements — into a compact, flexible skin. This enables simultaneous detection of force, contact position, texture, material, temperature, proximity and vibration with micrometer-level spatial precision. The sensor achieves better than 94% accuracy in classifying complex tactile features such as texture, material type, and slip dynamics.
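A common precursor to classifying such signals is fusing the per-modality readings into a single feature vector per contact event. The sketch below illustrates that step only; the modality names follow the article, but the dimensions, field layout and values are invented and are not SuperTac’s actual data format:

```python
from dataclasses import dataclass

@dataclass
class TactileFrame:
    """One contact event's readings across modalities (illustrative layout)."""
    optical: list[float]   # multispectral intensities (UV .. mid-IR channels)
    triboelectric: float   # contact-charge signal
    inertial: list[float]  # 3-axis accelerometer sample
    temperature: float     # degrees Celsius

    def fused(self) -> list[float]:
        """Concatenate all modalities into one flat feature vector,
        ready to feed a texture/material/slip classifier."""
        return [*self.optical, self.triboelectric, *self.inertial,
                self.temperature]

frame = TactileFrame(optical=[0.2, 0.5, 0.9], triboelectric=0.31,
                     inertial=[0.0, 0.01, 9.8], temperature=24.5)
features = frame.fused()
print(len(features))  # 3 + 1 + 3 + 1 = 8 features for this toy frame
```

In a real system each modality would contribute hundreds of channels per taxel, but the fusion pattern, flattening heterogeneous readings into one vector for a downstream model, is the same.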

However, the hardware alone isn’t enough: rich, multimodal tactile data need interpretation. To address this, the team developed DOVE, an 8.5-billion-parameter tactile language model that functions as a computational interpreter of touch. By learning patterns in the high-dimensional sensor outputs, DOVE provides semantic understanding of tactile interactions — a form of “touch reasoning” that goes beyond raw signal acquisition.

From a neurotech-inspired perspective, this work mirrors principles of biological somatosensation: multiple receptor types working in parallel, dense spatial encoding, and higher-order processing for perceptual meaning. Integrating rich physical sensing with model-based interpretation is akin to how the somatosensory cortex integrates mechanoreceptor inputs into coherent percepts of texture, shape and motion. Such hardware-software co-design — where advanced materials, optics, electronics and AI converge — offers a pathway toward embodied intelligence in machines that feel and interpret touch much like biological organisms do.


