AI‑Powered Attacks and Deepfakes: New Threats for Schools

CyberNut
July 23, 2025
5 min read

Artificial intelligence (AI) is transforming classrooms through personalised learning and adaptive assessment. Unfortunately, criminals are also harnessing AI to supercharge their attacks. Security researchers report that generative‑AI tools can craft grammatically flawless phishing emails at scale, stripping out the spelling and grammar mistakes that once helped recipients spot a scam. AI can also generate polymorphic malware, which continually rewrites its own code to evade signature‑based detection, meaning even advanced antivirus tools may fail to flag an infection.

One emerging danger is AI‑assisted deepfake audio and video. Criminals can record a short sample of a superintendent’s voice—perhaps from a board meeting webcast—and then generate a realistic recording of that individual asking a finance officer to wire funds for an “urgent emergency.” In 2024, multiple U.S. school districts reported receiving deepfake phone calls purporting to be from administrators or vendors. Some nearly transferred money before verifying the request.

The scale of the problem is growing rapidly. A 2025 industry survey found that 63% of cybersecurity leaders are worried about AI‑generated deepfakes, yet only 0.1% of the general public can reliably detect them. Deepfake incidents in the first quarter of 2025 alone exceeded the total for all of 2024 by 19%, and 72% of enterprises say they are concerned about the risks. Meanwhile, an estimated 98% of deepfake videos are pornographic, raising the risk that students or teachers could be targeted with harassment or extortion. These figures show that deepfakes are no longer a futuristic threat but a present danger.

To defend against AI‑powered attacks, schools should implement multi‑layered strategies. Training is critical: staff must learn to distrust unexpected requests for money or sensitive data, even if the request appears to come from a known voice or number. CyberNut’s platform can simulate spear‑phishing and vishing (voice phishing) scenarios, teaching employees to verify unusual requests through secondary channels. Students should also be taught media literacy—understanding that audio or video content can be manipulated and learning to look for signs of authenticity.
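To make the "verify through a secondary channel" habit concrete, here is a minimal Python sketch of the kind of rule a district finance office could adapt. The keyword list, threshold, and function name are hypothetical illustrations, not part of CyberNut's platform or any specific product.

    # Illustrative sketch only: a rule-of-thumb filter for flagging requests
    # that warrant a call-back on a number already on file. The phrases and
    # function name below are hypothetical examples.

    HIGH_RISK_PHRASES = {"wire transfer", "gift card", "urgent", "change of bank details"}

    def needs_out_of_band_check(message: str, sender_is_known: bool) -> bool:
        """Return True when a request should be confirmed by phone, using a
        number from district records, never one supplied in the message."""
        text = message.lower()
        risky = any(phrase in text for phrase in HIGH_RISK_PHRASES)
        # Known senders can be spoofed or deepfaked, so risky requests are
        # always verified; messages from unknown senders always are too.
        return risky or not sender_is_known

    # Example: a mail claiming to be the superintendent asking for a wire.
    print(needs_out_of_band_check("Please send a wire transfer today", True))  # True

Real phishing filters are far more sophisticated, but the habit the sketch encodes is the point: certain requests always trigger a call-back, no matter who appears to be asking.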

Technical controls matter too. Emerging voice‑verification tools can help flag synthetic audio by analysing frequency patterns and other acoustic artefacts, though no detector is foolproof. AI‑assisted email‑security tools can likewise spot unusual writing styles or suspicious attachments. Districts should require that fund transfers be approved by at least two people and that any change to vendor payment instructions be verified out‑of‑band, for example by calling a number already on file. Policy frameworks should include guidelines for using AI responsibly, addressing not only cyber risks but also ethical considerations in the classroom.
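The two‑person approval rule can also be enforced in software rather than left to habit. The Python sketch below is a hypothetical illustration under assumed names (TransferRequest, may_release_funds), not a reference to any real payment system.

    from dataclasses import dataclass, field

    @dataclass
    class TransferRequest:
        vendor: str
        amount: float
        verified_out_of_band: bool = False  # confirmed via a number on file
        approvers: set = field(default_factory=set)

    def approve(request: TransferRequest, approver: str) -> None:
        """Record a sign-off; duplicate sign-offs by one person are ignored."""
        request.approvers.add(approver)

    def may_release_funds(request: TransferRequest) -> bool:
        """Release money only after out-of-band verification and
        sign-off by at least two different people."""
        return request.verified_out_of_band and len(request.approvers) >= 2

    # Example: one approver is not enough, even for a verified request.
    req = TransferRequest(vendor="Acme Supplies", amount=25000.0,
                          verified_out_of_band=True)
    approve(req, "business.manager")
    assert not may_release_funds(req)
    approve(req, "superintendent")
    assert may_release_funds(req)

Even this toy version captures the key property: no single spoofed voice or compromised mailbox can move money on its own.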

As AI becomes more accessible, deepfakes and AI‑driven phishing will likely proliferate. By fostering critical‑thinking skills and implementing robust controls, schools can reduce the risk of falling victim to these sophisticated scams.
