Researchers have demonstrated a new way of attacking artificial intelligence computer vision systems that lets an attacker control what the AI "sees." The research shows that the new technique, called RisingAttacK, is effective at manipulating all of the most widely used AI computer vision systems.
At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system to control what the system sees, or does not see, in an image. For example, someone might manipulate an AI’s ability to detect traffic signals, pedestrians or other cars—which would cause problems for autonomous vehicles. Or a hacker could install code on an X-ray machine that causes an AI system to make inaccurate diagnoses.
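The general mechanism behind such attacks can be sketched in a few lines. The example below is not RisingAttacK itself (the paper's method is not detailed here); it is a minimal illustration of the classic gradient-sign approach (FGSM) applied to a toy linear "classifier" standing in for a real vision model, showing how a small, targeted change to input data can flip a model's output.

```python
import numpy as np

# A toy linear "classifier" standing in for a real vision model.
# This is an illustrative assumption, NOT the RisingAttacK algorithm.
rng = np.random.default_rng(0)
w = rng.normal(size=64)      # model weights
x = rng.normal(size=64)      # a toy "image" (flattened pixel values)

def score(inp):
    # Positive score -> detect object; negative -> no detection.
    return float(w @ inp)

# For a linear model, the gradient of the score w.r.t. the input is w.
# An FGSM-style attack nudges every pixel a small step (eps) in the
# direction that pushes the score toward the wrong answer:
eps = 0.1
x_adv = x - eps * np.sign(w)

# The perturbation is tiny per pixel (at most eps), yet it reliably
# lowers the detection score:
print(score(x), "->", score(x_adv))
```

The key point the example makes concrete is that the perturbation is bounded per pixel, so the altered input can look essentially unchanged to a human while steering the model's output, which is what makes such attacks dangerous for traffic-sign detection or medical imaging.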
“We wanted to find an effective way of hacking AI vision systems because these vision systems are often used in contexts that can affect human health and safety—from autonomous vehicles to health technologies to security applications,” says Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University.