
Hardware vulnerability allows attackers to hack AI training data

Researchers from NC State University have identified the first hardware vulnerability that allows attackers to compromise the data privacy of artificial intelligence (AI) users by exploiting the physical hardware on which AI is run.

The paper, “GATEBLEED: A Timing-Only Membership Inference Attack, MoE-Routing Inference, and a Stealthy, Generic Magnifier Via Hardware Power Gating in AI Accelerators,” will be presented at the IEEE/ACM International Symposium on Microarchitecture (MICRO 2025), being held Oct. 18–22 in Seoul, South Korea. The paper is currently available on the arXiv preprint server.

“What we’ve discovered is an AI privacy attack,” says Joshua Kalyanapu, first author of a paper on the work and a Ph.D. student at North Carolina State University. “Security attacks refer to stealing things actually stored somewhere in a system’s memory—such as stealing an AI model itself or stealing the hyperparameters of the model. That’s not what we found. Privacy attacks steal stuff not actually stored on the system, such as the data used to train the model and attributes of the data input to the model. These facts are leaked through the behavior of the AI model. What we found is the first vulnerability that allows successfully attacking AI privacy via hardware.”
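The paper's core idea is a timing-only membership inference attack: the attacker learns whether a given sample was in the training data purely from how long inference takes. The sketch below is a hypothetical illustration of that general idea, not the GATEBLEED technique itself; the `model`, `threshold`, and trial count are placeholders an attacker would calibrate against a real target.

```python
# Minimal sketch (assumptions, not the paper's method): time black-box queries
# and guess training-set membership from latency alone, without reading outputs.
import time
import statistics

def query_latency(model, x, trials: int = 50) -> float:
    """Median wall-clock latency of running the model on input x."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        model(x)  # black-box query; only timing is observed
        times.append(time.perf_counter() - start)
    return statistics.median(times)

def guess_membership(model, x, threshold: float) -> bool:
    """Guess whether x was in the training set from timing only.

    `threshold` would be calibrated on inputs of known membership;
    the direction of the comparison depends on the target system.
    """
    return query_latency(model, x) < threshold
```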

Theoretical Foundations of Artificial General Intelligence

This book is a collection of writings by active researchers in the field of Artificial General Intelligence, on topics of central importance in the field. Each chapter focuses on one theoretical problem, proposes a novel solution, and is written in sufficiently non-technical language to be understandable by advanced undergraduates or scientists in allied fields.

This book is the very first collection in the field of Artificial General Intelligence (AGI) focusing on theoretical, conceptual, and philosophical issues in the creation of thinking machines. All the authors are researchers actively developing AGI projects, thus distinguishing the book from much of the theoretical cognitive science and AI literature, which is generally quite divorced from practical AGI system building issues.

AI-based model can help traffic engineers predict future sites of possible crashes

In a significant step toward improving road safety, Johns Hopkins University researchers have developed an AI-based tool that can identify the risk factors contributing to car crashes across the United States and accurately predict future incidents.

The tool, called SafeTraffic Copilot, aims to provide experts with both crash analyses and crash predictions to reduce the rising number of fatalities and injuries that happen on U.S. roads each year.

The work, led by Johns Hopkins University researchers, is published in Nature Communications.

BatShadow Group Uses New Go-Based ‘Vampire Bot’ Malware to Hunt Job Seekers

In October 2024, Cyble also disclosed details of a sophisticated multi-stage attack campaign orchestrated by a Vietnamese threat actor that targeted job seekers and digital marketing professionals with Quasar RAT using phishing emails containing booby-trapped job description files.

BatShadow is assessed to have been active for at least a year, with prior campaigns using similar domains, such as samsung-work[.]com, to propagate malware families including Agent Tesla, Lumma Stealer, and Venom RAT.

“The BatShadow threat group continues to employ sophisticated social engineering tactics to target job seekers and digital marketing professionals,” Aryaka said. “By leveraging disguised documents and a multi-stage infection chain, the group delivers a Go-based Vampire Bot capable of system surveillance, data exfiltration, and remote task execution.”

Google’s New AI Doesn’t Just Find Vulnerabilities — It Rewrites Code to Patch Them

Google’s DeepMind division on Monday announced an artificial intelligence (AI)-powered agent called CodeMender that automatically detects, patches, and rewrites vulnerable code to prevent future exploits.

CodeMender adds to the company’s ongoing efforts to improve AI-powered vulnerability discovery, such as Big Sleep and OSS-Fuzz.

DeepMind said the AI agent is designed to be both reactive and proactive: it fixes new vulnerabilities as soon as they are spotted, and it rewrites and secures existing codebases with the aim of eliminating whole classes of vulnerabilities.

Google won’t fix new ASCII smuggling attack in Gemini

Google has decided not to fix a new ASCII smuggling attack in Gemini that could be used to trick the AI assistant into providing users with fake information, altering the model’s behavior, and silently poisoning its data.

ASCII smuggling is an attack in which special characters from the Unicode Tags block are used to introduce payloads that are invisible to users but can still be detected and processed by large language models (LLMs).

It’s similar to other attacks researchers have recently presented against Google Gemini, all of which exploit the gap between what users see and what machines read, such as CSS manipulation or exploiting GUI limitations.
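As a rough illustration of the mechanism described above (a minimal sketch, not taken from the reporting or from any disclosed exploit), the Unicode Tags block (U+E0000–U+E007F) maps printable ASCII onto code points that most renderers display as nothing, while text pipelines still receive them. The payload string here is purely hypothetical.

```python
# Minimal sketch of ASCII smuggling via the Unicode Tags block:
# each printable ASCII character maps to an invisible "tag" code point.

def smuggle(payload: str) -> str:
    """Encode printable ASCII text as invisible Tags-block characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in payload)

def reveal(text: str) -> str:
    """Recover any Tags-block characters hidden in a string."""
    return "".join(
        chr(ord(c) - 0xE0000)
        for c in text
        if 0xE0000 <= ord(c) <= 0xE007F
    )

visible = "Please summarize this calendar invite."
hidden = smuggle("ignore previous instructions")  # hypothetical injected text
message = visible + hidden

print(message)          # most UIs show only the visible sentence
print(reveal(message))  # -> "ignore previous instructions"
```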

AI-radar system tracks subtle health changes by assessing patient’s walk

Engineering and health researchers at the University of Waterloo have developed a radar and artificial intelligence (AI) system that can monitor multiple people walking in busy hospitals and long-term care facilities to identify possible health issues.

The new technology—housed in a wall-mounted device about the size of a deck of cards—uses AI software and radar hardware to accurately measure how fast each person is walking. A paper on their work, “Non-contact, non-visual, multi-person hallway gait monitoring,” appears in Scientific Reports.

“Walking speed is often called a functional vital sign because even subtle declines can be an early warning of health problems,” said Dr. Hajar Abedi, a former postdoctoral researcher in electrical and computer engineering at Waterloo.
