Jul 28, 2021

Hiding malware inside AI neural networks

Posted by in categories: cybercrime/malcode, robotics/AI

A trio of researchers at Cornell University has found that it is possible to hide malware code inside AI neural networks. Zhi Wang, Chaoge Liu and Xiang Cui have posted a paper on the arXiv preprint server describing their experiments with injecting code into neural networks.

As technology grows ever more complex, so do attempts by criminals to break into machines running it for their own purposes, such as destroying data or encrypting it and demanding payment from users for its return. In this new study, the team has found a new way to infect certain kinds of computer systems running artificial intelligence applications.

AI systems do their work by processing data in ways similar to the human brain. But such networks, the research trio found, are vulnerable to infiltration by foreign code.
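The core idea is steganographic: a neural network's millions of floating-point weights can each absorb a small perturbation without noticeably degrading accuracy, so attacker-controlled bytes can be smuggled into the low-order bits of the parameters. The sketch below is a minimal illustration of this general technique, not the paper's exact encoding (the `embed`/`extract` helpers and the choice of overwriting one low byte per float32 weight are assumptions for demonstration):

```python
import numpy as np

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least significant byte of each float32 weight.

    Overwriting the low mantissa byte (byte 0 in little-endian float32)
    perturbs each weight by a relative error of roughly 2**-16, which is
    typically too small to affect a trained model's predictions.
    """
    raw = weights.astype(np.float32).ravel().view(np.uint8)  # copy as bytes
    assert len(payload) <= raw.size // 4, "payload too large for this model"
    raw[0 : 4 * len(payload) : 4] = np.frombuffer(payload, dtype=np.uint8)
    return raw.view(np.float32)

def extract(weights: np.ndarray, n: int) -> bytes:
    """Recover n hidden bytes from the low byte of each float32 weight."""
    raw = weights.ravel().view(np.uint8)
    return raw[0 : 4 * n : 4].tobytes()
```

A recipient who knows the scheme downloads the (still functional) model, extracts the bytes, and reassembles the payload; the weights file itself looks like ordinary model data to a scanner.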
