Menu

Blog

Feb 26, 2023

Can AI really be protected from text-based attacks?

Posted by in categories: internet, robotics/AI

When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to get it to profess love, threaten harm, defend the Holocaust and invent conspiracy theories. Can AI ever be protected from these malicious prompts?

What set it off is malicious prompt engineering, or when an AI, like Bing Chat, that uses text-based instructions — prompts — to accomplish tasks is tricked by malicious, adversarial prompts (e.g. to perform tasks that weren’t a part of its objective. Bing Chat wasn’t designed with the intention of writing neo-Nazi propaganda. But because it was trained on vast amounts of text from the internet — some of it toxic — it’s susceptible to falling into unfortunate patterns.

Adam Hyland, a Ph.D. student at the University of Washington’s Human Centered Design and Engineering program, compared prompt engineering to an escalation of privilege attack. With escalation of privilege, a hacker is able to access resources — memory, for example — normally restricted to them because an audit didn’t capture all possible exploits.

Comments are closed.