Industrial control systems (ICS) operators have become increasingly frequent targets of cyberattacks. Initially, some decision-makers assumed criminals would overlook these systems, but many ICS control critical processes, such as those at wastewater treatment plants and power grids. Additionally, ICS are often outdated or insufficiently secured, making them comparatively easy to infiltrate.
Artificial intelligence elevates these risks because adversaries use the technology to broaden the impact of phishing attacks and raise overall success rates. This development requires cybersecurity professionals to mitigate existing threats while tracking emerging ones.
Cybersecurity experts have identified several reasons artificial intelligence has made phishing a bigger problem: it raises attack volumes and makes attempts far more believable. What should decision-makers understand and incorporate into training modules for ICS operators?
AI has assisted criminals in scaling polymorphic phishing attacks, expanding the number of potential victims they can target. This method randomizes email elements such as subject lines and sender names to quickly generate numerous similar messages that differ only in small details.
One analysis found at least one polymorphic characteristic in 76% of 2024 phishing attacks. It also indicated that 52% of those attacks came from compromised email accounts and a quarter used fraudulent domains. The researchers warned that these AI-driven attempts can bypass traditional detection mechanisms due to features such as dynamic URLs and delivery method modifications.
Additionally, artificial intelligence can adapt to user responses, such as sending further attempts if users do not respond to the first one. These challenges require cybersecurity professionals to remain vigilant and adjust their defenses accordingly.
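On the defensive side, one simplified way to surface polymorphic campaigns is to cluster near-duplicate message bodies: lures that differ only in a subject line, sender name, or URL still share most of their text. The sketch below is illustrative only, assuming messages are already available as plain text, using Python's standard-library `difflib` and a hypothetical 0.9 similarity threshold:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two message bodies (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_near_duplicates(bodies: list[str], threshold: float = 0.9) -> list[list[str]]:
    """Greedily group bodies whose similarity to a cluster's first member exceeds threshold."""
    clusters: list[list[str]] = []
    for body in bodies:
        for cluster in clusters:
            if similarity(body, cluster[0]) >= threshold:
                cluster.append(body)
                break
        else:
            clusters.append([body])
    return clusters

# Hypothetical inbound messages: two near-identical lures plus one legitimate note
emails = [
    "Urgent: reset your SCADA portal password now at hxxp://example-1.test",
    "Urgent: reset your SCADA portal password now at hxxp://example-2.test",
    "Quarterly maintenance window scheduled for Saturday.",
]
groups = cluster_near_duplicates(emails)
print(len(groups))  # 2: the two near-identical lures collapse into one cluster
```

A production system would use scalable techniques such as locality-sensitive hashing rather than pairwise comparison, but the principle, grouping by content similarity instead of exact matching, is the same.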
Spelling and grammar mistakes were once hallmarks of phishing emails. AI has made them less prevalent by helping scammers create believable, error-free messages. Large language models burst into popular culture with ChatGPT a few years ago, amazing people with how quickly the tool generated responses.
Some people created similar chatbots for malicious purposes. One makes fake dating profiles for romance scams. Others generate malware or find system vulnerabilities. These tools also let users target thousands of phishing email recipients in their native languages. That capability broadens criminals’ reach, potentially allowing them to do more damage in less time.
According to a July 2025 study of chief information security officers, 25% had experienced an AI-generated network attack within the past year. However, the report’s creators believe the actual number is higher because these incidents are difficult to detect without advanced metrics. Participants also said AI cybersecurity risks top their priority lists, deeming them more pressing than vulnerability management, data-loss prevention and third-party threats.
Cybersecurity professionals must update their threat-screening techniques, investing in platforms that evolve alongside AI-driven attack methods. Additionally, staying informed about new techniques will keep their defenses appropriately tuned.
The above trends make the threat landscape particularly challenging. What are the best steps to safeguard industrial control system operators and other potentially targeted parties?
Comprehensive worker education curricula covering the latest phishing threats and other cybersecurity risks improve preparedness. The content should provide actionable ways to keep accounts safer and reinforce that online security is everyone’s responsibility. For example, choosing passwords containing more than 16 characters without sequential letters or numbers makes them harder to crack with AI-driven tools.
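The password guidance above can be turned into a simple automated check. The following is a minimal sketch, not a complete password policy: it only tests the two criteria mentioned (length over 16 characters and no short ascending character runs such as "abc" or "123"), and the example passwords are hypothetical:

```python
def has_sequential_run(pw: str, run_length: int = 3) -> bool:
    """True if pw contains run_length consecutive ascending characters (e.g., 'abc', '123')."""
    for i in range(len(pw) - run_length + 1):
        chunk = pw[i:i + run_length]
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1 for j in range(run_length - 1)):
            return True
    return False

def meets_policy(pw: str) -> bool:
    """More than 16 characters and free of short ascending sequences."""
    return len(pw) > 16 and not has_sequential_run(pw)

print(meets_policy("Turbine-Mustard-Quartz-9!"))  # True: long, no sequential runs
print(meets_policy("abc123abc123abc123"))         # False: contains 'abc' and '123'
```

In practice, checks like this would complement, not replace, broader controls such as multifactor authentication and screening against known-breached password lists.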
Many companies also run advanced phishing simulations to learn which attack methods seem most believable and find the workers who engage with them. That approach reveals weak points to target in future training efforts.
Powerful email filtering solutions can keep risky content out of industrial control system operators’ inboxes, dramatically reducing phishing threats. These options also address other cybersecurity risks that could adversely affect corporate networks.
For example, opening an email may be enough to install a virus on a computer. Cybercriminals continually adapt their methods, making the long-standing advice to avoid downloading suspicious attachments insufficient on its own.
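At their simplest, filtering rules like those described above combine sender reputation with attachment screening. The sketch below is a toy illustration under stated assumptions: the allowlisted domains are hypothetical, and real gateways layer in many more signals (URL reputation, sandbox detonation, authentication checks such as SPF and DKIM):

```python
# Hypothetical allowlist of trusted sender domains and denylist of risky extensions
SAFE_SENDER_DOMAINS = {"example-utility.test", "example-vendor.test"}
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".iso"}

def quarantine(sender: str, attachments: list[str]) -> bool:
    """Quarantine mail from unknown domains or mail carrying executable attachments."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in SAFE_SENDER_DOMAINS:
        return True
    return any(name.lower().endswith(ext)
               for name in attachments for ext in RISKY_EXTENSIONS)

print(quarantine("ops@example-utility.test", ["report.pdf"]))   # False: trusted, benign
print(quarantine("hr@unknown.test", []))                        # True: unknown domain
print(quarantine("ops@example-utility.test", ["invoice.exe"]))  # True: risky attachment
```

Strict allowlisting of sender domains is often more workable in OT environments than in general business email, because the set of legitimate correspondents tends to be small and stable.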
Behavioral analysis tools also flag AI-powered phishing attempts by recognizing when people use communication platforms differently. Cybercriminals running business email compromise (BEC) attacks embrace familiarity, sometimes spending weeks learning how organizations’ most powerful employees behave and using that information to mimic them in highly orchestrated campaigns.
One 2024 study noted a 1,760% year-on-year rise in BEC attacks in 2023. The researchers mentioned AI as a driving factor behind these social engineering efforts because criminals can use the technology to develop new, increasingly sophisticated methods. However, artificial intelligence also recognizes unusual behaviors, supporting cybersecurity professionals’ defense plans that involve examining email activities.
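A very small example of the behavioral idea: baseline when an account normally sends email, then flag sends that fall far outside that pattern. This is a deliberately simplified sketch using a z-score over send hours with an assumed threshold of two standard deviations; real behavioral-analysis tools model many more features (recipients, phrasing, device, location):

```python
from statistics import mean, stdev

def flag_unusual_hour(history_hours: list[int], new_hour: int,
                      z_threshold: float = 2.0) -> bool:
    """Flag a send time more than z_threshold standard deviations from the baseline."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return new_hour != mu
    return abs(new_hour - mu) / sigma > z_threshold

# Hypothetical baseline: an executive who normally emails between 8 a.m. and 11 a.m.
baseline = [8, 9, 9, 10, 11, 9, 10, 8, 9, 10]
print(flag_unusual_hour(baseline, 3))   # True: a 3 a.m. send is anomalous
print(flag_unusual_hour(baseline, 10))  # False: within the normal window
```

A BEC attacker who has compromised an account can copy its writing style far more easily than its full behavioral fingerprint, which is why this class of detection remains useful against AI-polished impersonation.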
AI has upended the threat landscape, requiring cybersecurity experts and all internet users to update their defense methods. These tips enable industrial control system operators and other frequently targeted professionals to stay safer.
Interested in reading more articles like this? Subscribe to the ISAGCA blog and receive regular emails with links to thought leadership, research and other insights from the OT cybersecurity community.