Date: October 30, 2025
Industry experts are sounding alarms over a controversial research paper from the Cybersecurity at MIT Sloan (CAMS) program at the MIT Sloan School of Management, produced in partnership with the vendor Safe Security. The working paper asserts that "80.83% of ransomware attacks are AI-driven," claiming that adversarial artificial intelligence now automates entire attack sequences.
Critics argue that the report lacks a transparent methodology, underlying datasets, or a clear definition of what constitutes "AI-enabled" ransomware. The paper labels nearly every major ransomware family as AI-powered, including some groups that had already been dismantled, without supporting evidence.
Prominent security researcher Kevin Beaumont described the paper as "absolutely ridiculous," noting that many of the ransomware incidents it cites are not known to have involved AI in the manner claimed.
Further concerns center on the academic-vendor structure behind the research: the CAMS program is funded by corporate sponsors, and one of the report's authors is affiliated with Safe Security, whose commercial platform promotes AI-driven cyber risk quantification. Some observers say the MIT brand lends undue credibility to what they consider marketing-driven findings.
By contrast, independent incident data, such as ENISA threat reporting and the Verizon Data Breach Investigations Report, paints a markedly different picture: while AI is increasingly used for tasks like phishing, social engineering, and reconnaissance, there is little evidence that ransomware operations at large are being orchestrated or executed by AI at the scale claimed.
Key takeaways:
- The hype around "AI-powered ransomware" has outpaced the available evidence.
- Threat research claims require data transparency and independent verification.
- Vendor-driven narrative inflation can distort the field, even when findings carry an academic imprimatur.