
Natural language processing continues to find its way into unexpected corners. This time, it's phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor intensive to compose, though. That’s where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore's Government Technology Agency presented a recent experiment in which they sent 200 of their colleagues targeted phishing emails, some crafted by hand and others generated by an AI-as-a-service platform. Both sets of messages contained links that were not actually malicious but simply reported clickthrough rates back to the researchers. The team was surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones—by a significant margin.
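The measurement mechanism here is simple: each email carries a unique per-recipient link, and the landing endpoint records which links were followed. A minimal sketch of that idea, with hypothetical names and a dummy domain (not the researchers' actual tooling), might look like this:

```python
# Hedged sketch of clickthrough tracking via per-recipient link tokens.
# All identifiers and the example.test domain are illustrative assumptions.
from collections import Counter


class ClickTracker:
    """Maps per-recipient link tokens to recorded clicks."""

    def __init__(self, recipients):
        # One unique token per recipient; a real system would use
        # unguessable random IDs rather than sequential ones.
        self.token_to_recipient = {f"t{i}": r for i, r in enumerate(recipients)}
        self.clicks = Counter()

    def link_for(self, recipient):
        """Build the tracking link embedded in that recipient's email."""
        token = next(
            t for t, r in self.token_to_recipient.items() if r == recipient
        )
        return f"https://example.test/landing?token={token}"

    def record_visit(self, token):
        """Called by the landing page whenever a link is followed."""
        if token in self.token_to_recipient:
            self.clicks[self.token_to_recipient[token]] += 1

    def clickthrough_rate(self):
        """Fraction of recipients who clicked at least once."""
        clicked = sum(
            1 for r in self.token_to_recipient.values() if self.clicks[r]
        )
        return clicked / len(self.token_to_recipient)


tracker = ClickTracker(["alice", "bob", "carol", "dave"])
tracker.record_visit("t0")          # alice follows her link
tracker.record_visit("t2")          # carol follows hers
print(tracker.clickthrough_rate())  # 0.5
```

Comparing the rate computed over the AI-generated cohort against the hand-written cohort is all the experiment needs; the links themselves carry no payload.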

"I am used to being harassed online. But this was different," Oueiss said. "It was as if someone had entered my home, my bedroom, my bathroom. I felt so unsafe and traumatized."

Oueiss is one of several high-profile female journalists and activists who have allegedly been targeted and harassed by authoritarian regimes in the Middle East through hack-and-leak attacks using the Pegasus spyware, created by the Israeli surveillance technology company NSO Group. The spyware transforms a phone into a surveillance device, activating its microphones and cameras and exfiltrating files without the user's knowledge.