The battle between hackers and cybersecurity experts has officially entered the era of AI. Microsoft recently detected and blocked a new phishing campaign that utilized AI-generated code to conceal its malicious payload, a clear indication that attackers are becoming increasingly creative with their tools.
The attack was designed to steal login credentials. It started with an email sent from a compromised small-business account. The email carried an attachment that appeared to be a PDF but was actually an SVG (Scalable Vector Graphics) file, an image format that, unlike a PDF or a JPEG, is XML-based and can contain embedded scripts. When a user opened the file, the hidden script redirected them to a fake sign-in page designed to harvest their username and password.
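Because SVGs are just XML text, a defender can screen attachments for image files that also carry script. The sketch below is a minimal, illustrative check (the function name and the example redirect URL are hypothetical, not from Microsoft's report):

```python
def looks_like_scripted_svg(data: bytes) -> bool:
    """Crude illustrative check: flag files that claim to be images
    but embed script. Real SVGs are XML, so we look for an <svg>
    root plus <script> tags, javascript: URIs, or event handlers."""
    text = data.decode("utf-8", errors="ignore").lower()
    if "<svg" not in text:
        return False
    return "<script" in text or "javascript:" in text or "onload=" in text

# A benign SVG passes; one carrying a script handler is flagged.
clean = b'<svg xmlns="http://www.w3.org/2000/svg"><rect width="10" height="10"/></svg>'
shady = b'<svg xmlns="http://www.w3.org/2000/svg" onload="window.location=\'https://phish.example\'"/>'
print(looks_like_scripted_svg(clean))  # False
print(looks_like_scripted_svg(shady))  # True
```

A production filter would parse the XML properly rather than string-match, but the point stands: an "image" that can run code deserves extra scrutiny at the mail gateway.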
What made this attack unique was how the code was hidden. Instead of conventional encryption, the script disguised its malicious payload by encoding it as sequences of ordinary business-related terms, which a small embedded routine translated back into working code at runtime, letting the file read like harmless corporate jargon to a casual scan.
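To make the idea concrete, here is a toy version of that kind of word-for-character substitution. The word list and encoding scheme are invented for illustration; Microsoft has not published the campaign's exact dictionary:

```python
# Hypothetical scheme: each character's code point is written as three
# digits, and each digit is replaced by an innocuous business term.
WORDS = ["revenue", "invoice", "quarterly", "synergy", "forecast",
         "margin", "audit", "payroll", "ledger", "dividend"]

def encode(payload: str) -> list[str]:
    # "l" -> 108 -> "108" -> ["invoice", "revenue", "ledger"]
    return [WORDS[int(d)] for ch in payload for d in f"{ord(ch):03d}"]

def decode(tokens: list[str]) -> str:
    # The embedded decoder reverses the mapping to recover the payload.
    digits = "".join(str(WORDS.index(t)) for t in tokens)
    return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

obfuscated = encode("login")
print(obfuscated[:6])          # reads like business jargon
print(decode(obfuscated))      # prints "login"
```

The trick defeats naive keyword scanning precisely because nothing in the encoded form looks like code; only behavioral or AI-assisted analysis of the decoder logic gives it away.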
So how did Microsoft know it was AI? Their own AI security tool, Security Copilot, spotted the tell-tale signs. The code was too polished and uniform: it used unusually long, descriptive names for variables and functions and repeated the same structural patterns over and over, all hallmarks of something written by a machine rather than a human.
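Those signals can be approximated with very simple heuristics. The sketch below is a toy stand-in, not Security Copilot's actual method, and its thresholds and sample code are invented for illustration:

```python
import re
from collections import Counter

def ai_style_signals(source: str) -> dict:
    """Toy heuristics echoing the signals described above:
    unusually long identifiers and highly repetitive line structure."""
    identifiers = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source)
    long_names = [name for name in identifiers if len(name) > 25]
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    # Repetition proxy: how often the most common opening token recurs.
    shapes = Counter(ln.split()[0] for ln in lines if ln.split())
    top_repeat = shapes.most_common(1)[0][1] if shapes else 0
    return {
        "long_identifier_count": len(long_names),
        "max_repeated_line_shape": top_repeat,
    }

# Fabricated sample in the over-descriptive style described above.
sample = "\n".join(
    f"var calculateQuarterlyRevenueProjectionValue{i} = transformBusinessMetricDataField({i});"
    for i in range(5)
)
print(ai_style_signals(sample))
```

Real classifiers would combine many such features statistically; the value of the toy version is showing that "too perfect" is something you can actually measure.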
Ultimately, the attack was small, easily blocked, and primarily aimed at U.S. organizations. Still, the incident serves as a clear warning: hackers are actively experimenting with AI to develop more sophisticated and convincing attacks. At the same time, it shows that security companies are using their own AI tools to stay one step ahead in this evolving cat-and-mouse game.